

  1. Computational Physics (Lecture 9) PHY4370

  2. Quadratic form • A quadratic form is simply a scalar, quadratic function of a vector, of the form • f(x) = (1/2)xTAx - bTx + c, • where A is a matrix, x and b are vectors, and c is a scalar constant. If A is symmetric and positive definite, f(x) is minimized by the solution of Ax = b.

  3. Sample problem • A two-dimensional example with c = 0: f(x) is then a paraboloid over the plane, and its minimum sits at the solution of Ax = b.

  4. The gradient • f'(x) = (∂f/∂x1, …, ∂f/∂xn)T = (1/2)ATx + (1/2)Ax - b. • If A is symmetric, this reduces to: • f'(x) = Ax - b.
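As a quick illustration of these two slides, here is a minimal Java sketch that evaluates f(x) = (1/2)xTAx - bTx + c and checks numerically that its gradient vanishes at the solution of Ax = b. The matrix, vectors, and step size are made-up example values, not taken from the slides:

  // Check that the minimizer of the quadratic form solves Ax = b.
  // Example values only; a sketch, not the course's code.
  public class QuadraticFormDemo {
      static double f(double[][] A, double[] b, double c, double[] x) {
          int n = x.length;
          double s = 0.0;
          for (int i = 0; i < n; i++) {
              for (int j = 0; j < n; j++) s += 0.5 * x[i] * A[i][j] * x[j];
              s -= b[i] * x[i];
          }
          return s + c;
      }
      public static void main(String[] args) {
          double[][] A = {{3, 2}, {2, 6}};   // symmetric positive definite (example)
          double[] b = {2, -8};
          double[] x = {2, -2};              // solution of Ax = b for these values
          double h = 1e-6;                   // finite-difference step (assumed)
          for (int i = 0; i < x.length; i++) {
              double[] xp = x.clone();
              xp[i] += h;
              double grad = (f(A, b, 0, xp) - f(A, b, 0, x)) / h; // numerical df/dx_i
              System.out.println("df/dx_" + i + " ~ " + grad);    // ~0 at the minimizer
          }
      }
  }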

  5. Revisit of Steepest Descent • We start from an arbitrary point x(0) • and slide down to the bottom of the paraboloid. • At each step, we choose the direction in which f decreases most quickly: • -f'(x(i)) = b - Ax(i). • Error: e(i) = x(i) - x, i.e., how far we are from the solution. • Residual: r(i) = b - Ax(i), i.e., how far we are from the correct value of b.

  6. Suppose we start from (-2, -2). Moving along the direction of steepest descent, x(1) will fall somewhere on the solid line. • x(1) = x(0) + αr(0) • How big a step should we take? • Line search: • choose α to minimize f along the line, i.e., df(x(1))/dα = 0. • By the chain rule, f'(x(1))Tr(0) = 0. • So α is chosen to make r(0) and f'(x(1)) orthogonal!

  7. Determine α • f'(x(1)) = -r(1) • r(1)Tr(0) = 0 • (b - Ax(1))Tr(0) = 0 • (b - A(x(0) + αr(0)))Tr(0) = 0 • (b - Ax(0))Tr(0) = α(Ar(0))Tr(0) • r(0)Tr(0) = αr(0)T(Ar(0)) • α = (r(0)Tr(0))/(r(0)TAr(0))

  8. General method of Steepest Descent • Putting the previous slides together, each iteration uses the residual as the search direction: • r(i) = b - Ax(i) • α(i) = (r(i)Tr(i))/(r(i)TAr(i)) • x(i+1) = x(i) + α(i)r(i)

  9. Conjugate directions • SD often takes steps in the same direction! • If we could take n orthogonal steps, each with the correct length, then after n steps we would be done. • In general, for each step we have • x(i+1) = x(i) + α(i)d(i) • e(i+1) should be orthogonal to d(i): • d(i)Te(i+1) = 0 • d(i)T(e(i) + α(i)d(i)) = 0 • α(i) = -d(i)Te(i)/(d(i)Td(i))

  10. Useless! We don't know e(i). • Instead, we make d(i) and d(j) A-orthogonal, or conjugate: • d(i)TAd(j) = 0 (for i ≠ j)

  11. New requirement: • e(i+1) is A-orthogonal to d(i) • Like SD, we need to find the minimum along d(i)

  12. This time, evaluating α(i) with the new requirement d(i)TAe(i+1) = 0: • d(i)TA(e(i) + α(i)d(i)) = 0 • α(i) = -(d(i)TAe(i))/(d(i)TAd(i)) = (d(i)Tr(i))/(d(i)TAd(i)), • using r(i) = -Ae(i).

  13. Conjugate Gradients • Just the conjugate directions method where the search directions are constructed by conjugation of the residuals: • ui = r(i) in the Gram-Schmidt conjugation of the directions.

  14. General Method • Combining the pieces: start with d(0) = r(0) = b - Ax(0), then repeat: • α(i) = (r(i)Tr(i))/(d(i)TAd(i)) • x(i+1) = x(i) + α(i)d(i) • r(i+1) = r(i) - α(i)Ad(i) • β(i+1) = (r(i+1)Tr(i+1))/(r(i)Tr(i)) • d(i+1) = r(i+1) + β(i+1)d(i)
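As a concrete companion to this method, here is a minimal Java sketch of the iteration above. It is a sketch only, with made-up A, b, and starting point, not the course's code:

  // Minimal conjugate-gradient sketch for Ax = b with a symmetric
  // positive-definite A. Example values only.
  public class CGDemo {
      static double[] matVec(double[][] A, double[] v) {
          double[] w = new double[v.length];
          for (int i = 0; i < v.length; i++)
              for (int j = 0; j < v.length; j++) w[i] += A[i][j] * v[j];
          return w;
      }
      static double dot(double[] u, double[] v) {
          double s = 0;
          for (int i = 0; i < u.length; i++) s += u[i] * v[i];
          return s;
      }
      public static void main(String[] args) {
          double[][] A = {{3, 2}, {2, 6}};       // symmetric positive definite
          double[] b = {2, -8};
          double[] x = {-2, -2};                 // arbitrary starting point x(0)
          double[] r = b.clone();                // r(0) = b - Ax(0)
          double[] Ax = matVec(A, x);
          for (int i = 0; i < r.length; i++) r[i] -= Ax[i];
          double[] d = r.clone();                // d(0) = r(0)
          double rr = dot(r, r);
          for (int k = 0; k < r.length && rr > 1e-20; k++) {
              double[] Ad = matVec(A, d);
              double alpha = rr / dot(d, Ad);    // alpha(i) = r.r / d.Ad
              for (int i = 0; i < x.length; i++) {
                  x[i] += alpha * d[i];          // x(i+1) = x(i) + alpha d(i)
                  r[i] -= alpha * Ad[i];         // r(i+1) = r(i) - alpha A d(i)
              }
              double rrNew = dot(r, r);
              double beta = rrNew / rr;          // beta(i+1) = r_new.r_new / r.r
              for (int i = 0; i < d.length; i++) d[i] = r[i] + beta * d[i];
              rr = rrNew;
          }
          System.out.println("x = " + x[0] + ", " + x[1]);  // expect (2, -2)
      }
  }

For an n-by-n system, exact arithmetic converges in at most n iterations, which is why the loop runs at most r.length times here.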

  15. Sample code for SD
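A minimal Java sketch of the steepest-descent method of slide 8 (illustrative values for A, b, and x(0); not the original course program):

  // Minimal steepest-descent sketch for Ax = b (symmetric positive-definite A).
  // Example values only.
  public class SDDemo {
      public static void main(String[] args) {
          double[][] A = {{3, 2}, {2, 6}};
          double[] b = {2, -8};
          double[] x = {-2, -2};                 // starting point x(0)
          for (int k = 0; k < 1000; k++) {
              double[] r = new double[2];        // r(i) = b - Ax(i)
              for (int i = 0; i < 2; i++) {
                  r[i] = b[i];
                  for (int j = 0; j < 2; j++) r[i] -= A[i][j] * x[j];
              }
              double rr = r[0]*r[0] + r[1]*r[1];
              if (rr < 1e-20) break;             // converged
              double[] Ar = new double[2];
              for (int i = 0; i < 2; i++)
                  for (int j = 0; j < 2; j++) Ar[i] += A[i][j] * r[j];
              double alpha = rr / (r[0]*Ar[0] + r[1]*Ar[1]); // exact line search
              for (int i = 0; i < 2; i++) x[i] += alpha * r[i]; // step along r(i)
          }
          System.out.println("x = " + x[0] + ", " + x[1]);      // expect (2, -2)
      }
  }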

  16. Revised Algorithm using line minimization for nonlinear functions
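For a general nonlinear f there is no closed-form α, so the revision is to minimize f along each search direction numerically. A minimal golden-section line-minimization sketch; the example function, bracket, and tolerance are illustrative assumptions, not the slide's algorithm:

  // Golden-section minimization of g(alpha) = f(x + alpha*d).
  // Sketch of the idea only; f, the bracket, and the tolerance are assumed.
  public class LineMin {
      static double f(double[] x) {              // example nonlinear function
          return Math.pow(x[0] - 1, 4) + x[1] * x[1];
      }
      static double g(double[] x, double[] d, double a) {
          return f(new double[]{x[0] + a * d[0], x[1] + a * d[1]});
      }
      public static void main(String[] args) {
          double[] x = {0, 1};                   // current point
          double[] d = {1, -1};                  // search direction (e.g., -grad f)
          double lo = 0, hi = 2;                 // assumed bracket for alpha
          double w = (Math.sqrt(5) - 1) / 2;     // golden ratio ~0.618
          while (hi - lo > 1e-8) {
              double a1 = hi - w * (hi - lo);
              double a2 = lo + w * (hi - lo);
              if (g(x, d, a1) < g(x, d, a2)) hi = a2; else lo = a1;
          }
          System.out.println("alpha ~ " + (lo + hi) / 2);  // ~1 for this example
      }
  }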

  17. Multivariable Newton method • Solve f(x) = 0, with f = (f1, f2, …, fn) and x = (x1, x2, …, xn). • Taylor expansion about x, evaluated at the root xr, with Δx = xr - x: • f(xr) = f(x) + Δx · ∇f(x) + O(Δx²) ≃ 0, • which gives the linear system AΔx = b, with • Aij = ∂fi(x)/∂xj • bi = -fi(x). • Newton iteration: xk+1 = xk + Δxk. • Secant method: approximate the derivatives by finite differences, • Aij = (fi(x + hj x̂j) - fi(x))/hj, with hj ≃ δ0 xj, • where δ0 is the floating-point tolerance of the data and x̂j is the unit vector along xj. • Example: f1(x, y) = exp(x²) ln y - x² = 0 and f2(x, y) = exp(y²) ln x - y² = 0, around x = y = 1.5.

  18. // An example of searching for the root via the secant method
  // for f_i[x_k] for i = 1, 2, ..., n.
  import java.lang.*;
  public class Rootm {
    public static void main(String argv[]) {
      int n = 2;
      int ni = 10;
      double del = 1e-6;
      double x[] = {1.5, 1.5};
      secantm(ni, x, del);

      // Output the root obtained
      System.out.println("The root is at x = " + x[0]
        + "; y = " + x[1]);
    }

    // Method to carry out the multivariable secant search.
    public static void secantm(int ni, double x[], double del) {
      int n = x.length;
      double h = 2e-5;
      int index[] = new int[n];
      double a[][] = new double[n][n];
      int k = 0;
      double dx = 0.1;
      while ((Math.abs(dx) > del) && (k < ni)) {
        double b[] = f(x);
        // Build the secant approximation to the Jacobian, a[i][j] ~ df_i/dx_j.
        for (int i = 0; i < n; ++i) {
          for (int j = 0; j < n; ++j) {
            double hx = x[j] * h;
            x[j] += hx;
            double c[] = f(x);
            a[i][j] = (c[i] - b[i]) / hx;
            x[j] -= hx;        // restore x[j] after the finite difference
          }
        }
        // Solve a*d = f(x) and update x_{k+1} = x_k - d.
        double d[] = solve(a, b, index);
        dx = 0;
        for (int i = 0; i < n; ++i) {
          dx += d[i] * d[i];
          x[i] -= d[i];
        }
        dx = Math.sqrt(dx / n);
        k++;
      }
      if (k == ni) System.out.println("Convergence not"
        + " found after " + ni + " iterations");
    }

    // Linear-equation solver via Gaussian elimination (bodies omitted here).
    public static double[] solve(double a[][], double b[], int index[]) {...}
    public static void gaussian(double a[][], int index[]) {...}

    // Method to provide the function f_i[x_k].
    public static double[] f(double x[]) {
      double fx[] = new double[2];
      fx[0] = Math.exp(x[0]*x[0]) * Math.log(x[1]) - x[0]*x[0];
      fx[1] = Math.exp(x[1]*x[1]) * Math.log(x[0]) - x[1]*x[1];
      return fx;
    }
  }

  19. Markov Chain (Review) • Construct a process that, from any microstate of the system, migrates to a new state. • We use xi to denote the microstate. If we start from x0, this process creates a sequence of states: • x1, x2, …, xi, …; these states form a chain. • A Markov process is a process in which each state depends only on the previous state. • The transfer probability from state r to state s is w(xr → xs). A Markov process creates a Markov chain, • also called a memoryless chain. • To realize the desired equilibrium distribution, we construct the Markov chain so that, no matter which state we start from, there exists a large number M such that after we discard the first M states, the remainder follows the desired distribution.

  20. If w(xr → xs) satisfies the equilibrium (detailed-balance) condition • P(xr)w(xr → xs) = P(xs)w(xs → xr), • where P(x) is the desired distribution. • To prove it, consider N parallel Markov chains. At a certain step, • Nr chains are in state r and Ns chains are in state s. • The number of chains that next transfer from r to s is Nr w(xr → xs), • and from s to r is Ns w(xs → xr). • The net number of transfers from r to s is therefore • ΔNr→s = Nr w(xr → xs) - Ns w(xs → xr).

  21. If w(xr → xs) satisfies the equilibrium condition, then w(xr → xs)/w(xs → xr) = P(xs)/P(xr), so • ΔNr→s = Ns w(xs → xr) [(Nr/Ns)(P(xs)/P(xr)) - 1], • which is positive whenever Nr/Ns > P(xr)/P(xs), i.e., whenever state r is over-populated relative to the desired distribution. • This is a very important result: if two states do not yet satisfy the desired distribution, the Markov process tends to drive them toward it.

  22. Sampling the desired distribution: • choose a transfer probability that satisfies the equilibrium condition for the desired distribution; • create a Markov chain and throw away the first M states; • calculate the physical quantities over the remaining states. • This algorithm was proposed by Metropolis and co-workers in the 1950s: • the Metropolis algorithm. • Suppose we go from state r to state s, with energy difference ΔE = Es - Er. Then: • w(xr → xs) = 1 if ΔE ≤ 0, and w(xr → xs) = exp(-ΔE/kBT) if ΔE > 0.

  23. Metropolis algorithm: • Another popular choice is a heat-bath form such as the Glauber rate, w(xr → xs) = 1/(1 + exp(ΔE/kBT)). • The choice of w is not unique. • However, different w's converge very differently. How to choose a good w is still a frontier of Monte Carlo algorithms.

  24. Back to the Ising model • The Ising model is a very simple model proposed by Lenz in the 1920s. • Ising gave the 1D solution and proved that there is no phase transition in the 1D Ising model. • Onsager found the 2D solution in 1944 and calculated the phase-transition temperature. • C. N. Yang solved the 2D Ising model in the limit of very small external field h, obtaining the spontaneous magnetization.

  25. In the Ising model, the quantities of common interest are the energy, the order parameter, and the fluctuations of the energy and of the order parameter.

  26. One algorithm (a code sketch follows below): ♦ Choose a lattice point i and flip its spin. ♦ Calculate the energy difference ΔH. ♦ Calculate the transfer probability w. ♦ Generate a uniformly distributed random number ξ in [0, 1]. ♦ If ξ < w, accept the spin flip; otherwise, leave the spin unchanged. ♦ Calculate the physical properties.
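A minimal Java sketch of this single-spin-flip Metropolis algorithm for the 2D Ising model. The lattice size, temperature, J = 1, and h = 0 are assumed illustrative choices, not the course's parameters:

  // Single-spin-flip Metropolis sweeps for a 2D Ising model with J = 1,
  // h = 0, and periodic boundaries. A sketch of the algorithm above.
  import java.util.Random;
  public class IsingSweep {
      public static void main(String[] args) {
          int L = 16;                    // lattice size (assumed)
          double T = 2.0;                // temperature in units of J/kB (assumed)
          int[][] s = new int[L][L];
          Random rng = new Random();
          for (int i = 0; i < L; i++)
              for (int j = 0; j < L; j++) s[i][j] = rng.nextBoolean() ? 1 : -1;

          for (int mcs = 0; mcs < 1000; mcs++) {      // Monte Carlo steps
              for (int n = 0; n < L * L; n++) {       // one MCS visits L*L sites
                  int i = rng.nextInt(L), j = rng.nextInt(L);
                  int nn = s[(i+1)%L][j] + s[(i+L-1)%L][j]
                         + s[i][(j+1)%L] + s[i][(j+L-1)%L];
                  double dE = 2.0 * s[i][j] * nn;     // energy change if flipped
                  // Metropolis: accept if dE <= 0, else with prob exp(-dE/T)
                  if (dE <= 0 || rng.nextDouble() < Math.exp(-dE / T))
                      s[i][j] = -s[i][j];
              }
          }
          int m = 0;
          for (int[] row : s) for (int v : row) m += v;
          System.out.println("magnetization per spin = " + (double) m / (L*L));
      }
  }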

  27. Discussion • The choice of lattice point i can differ. Two common approaches: visit the points in the order of the lattice index, or choose the point at random. When choosing i randomly, we must make sure each lattice point is visited the same number of times on average. • One Monte Carlo step (MCS) consists of visiting each point once. Thousands or millions of MCS are performed in a calculation. • Since only one spin flip separates the current state from the previous one, the physical properties of successive states are highly correlated. Therefore, we do not have to calculate the physical quantities at every step. • The energy-difference calculation is time consuming. In the Ising model, the energy difference can take only a few values, so we can calculate them before the simulation and store them, as in the sketch below. This is a typical strategy for many MC calculations.
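A minimal sketch of that precompute-and-store strategy (assuming J = 1, h = 0, and the temperature above): with four neighbors, s·nn takes only the values -4, -2, 0, 2, 4, so all acceptance factors can be tabulated once before the simulation.

  // Precomputed Metropolis factors for the 2D Ising model (J = 1, h = 0).
  // dE = 2*s*nn with s*nn in {-4,-2,0,2,4}; tabulate w = min(1, exp(-dE/T)).
  // Sketch with an assumed temperature; meant to replace Math.exp in the sweep.
  public class IsingTable {
      static double[] acceptTable(double T) {
          double[] w = new double[9];          // indexed by s*nn + 4
          for (int k = -4; k <= 4; k += 2) {
              double dE = 2.0 * k;
              w[k + 4] = (dE <= 0) ? 1.0 : Math.exp(-dE / T);
          }
          return w;
      }
      public static void main(String[] args) {
          double[] w = acceptTable(2.0);
          // In the sweep: accept the flip when rng.nextDouble() < w[s[i][j]*nn + 4]
          for (int k = -4; k <= 4; k += 2)
              System.out.println("s*nn = " + k + "  w = " + w[k + 4]);
      }
  }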

  28. Homework 2 (due on Oct. 13) • Rewrite the F-L program (given in Java in the lecture notes) in C or Fortran and use the revised program to calculate the inverse of a 4x4 matrix (you can create the 4x4 matrix yourself). • Write a simple 2D random-walk code in which each step has length a. Plot your first 20 steps. From the simulation results, estimate the relationship between the average distance from the starting point to the end point and the total number of steps N. Hint: plot log(distance) vs. log N.

  29. Proj A, part 4 (due two weeks after the lab) 1. Write a subroutine for a total-energy calculation implementing the Lennard-Jones potential. 2. Write a subroutine for a total-energy calculation implementing one of the following potentials, at your preference: a) the EAM potential for metals, b) the SW or Tersoff potential for Si, or c) the Ewald summation of the Coulomb potential for ionic solids. You need to consult the literature for all the related parameters. Also make sure to include your references in your report.
