

D. P. Landau, Center for Simulational Physics, The University of Georgia. Introduction and perspective; Methodology: Monte Carlo simulations, molecular dynamics simulations; a few ‘characteristic’ examples; summary and overview.





Presentation Transcript


  1. Computer Simulations in Classical Systems. D. P. Landau, Center for Simulational Physics, The University of Georgia. • Introduction and perspective • Methodology: Monte Carlo simulations, molecular dynamics simulations • A few ‘characteristic’ examples* • Summary and overview. Croucher ASI on Frontiers in Computational Methods and Their Applications in Physical Sciences, Dec. 6–13, 2005, The Chinese University of Hong Kong. *There is only time to mention a few!

  2. [Diagram: NATURE at the center, linked to Theory, Experiment, and Simulation]

  3. What is a Monte Carlo Simulation? . . . follow the "time dependence" of a model for which change is not deterministic (e.g. given by Newton's laws) but stochastic. Different sequences of random numbers should give results that agree to within “statistical error”. The approach is valid for many problems, e.g.: • magnets • percolation • diffusion limited aggregation (DLA) → discovered on the computer!
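The requirement that different random-number sequences agree to within statistical error can be checked on the simplest stochastic model. The sketch below (a minimal illustration added here, not from the lecture) follows 1-d random walks with two different seeds and compares the mean squared displacement against the exact result ⟨x²⟩ = n_steps.

```python
import random

def random_walk_msd(n_walks, n_steps, seed):
    """Mean squared displacement of 1-d random walks -- the simplest
    stochastic 'time dependence' in the Monte Carlo sense."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x = 0
        for _ in range(n_steps):
            x += 1 if rng.random() < 0.5 else -1
        total += x * x
    return total / n_walks

# Two independent random-number sequences should agree to within
# statistical error; the exact answer is <x^2> = n_steps = 100.
msd_a = random_walk_msd(2000, 100, seed=1)
msd_b = random_walk_msd(2000, 100, seed=2)
```

With 2000 walks the statistical error on ⟨x²⟩ is a few percent, so the two runs agree with each other and with the exact value at that level.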

  4. Some Physical Problems Amenable to Study with Classical Monte Carlo • Magnets . . . phase transitions and critical behavior • Binary (AB) metallic alloys: ordering, interdiffusion, or unmixing kinetics. Jump rates depend on the local environment. (Characteristic times between jumps >> atomic vibration periods.) • DLA growth of colloidal particles . . . their masses >> atomic masses → motion of colloidal particles in fluids is described by Brownian motion. • “Micelle formation” in microemulsions (water-oil-surfactant mixtures). Part of the “art” of simulation is the appropriate choice (or invention!) of a suitable (coarse-grained) model. • Polymer models, e.g. flexible polymers, polymer blends.

  5. What difficulties will we encounter? Limited computer time and memory Important... consider requirements of memory and cpu time BEFORE starting. Are there adequate resources now? Is a new algorithm needed?... Developing new strategies is itself exciting! Statistical and other errors • Computers have finite word length → limited precision. Truncation & round-off may produce problems. • Statistical errors come from finite sampling. Estimate them, then decide whether to simulate longer or to use the cpu time to study the properties of the system under other conditions. • Systematic errors. An algorithm may not describe the physics properly, e.g. due to finite particle number, etc.

  6. Strategy • New simulations face hidden dangers. Begin with a simple program, small systems, and short runs. Test the program for special values of parameters for which the answers are known. Find parameter ranges of interest and any unexpected difficulties. Then, refine the program and increase running times. • Use BOTH cpu time and human time effectively. Don’t spend a month rewriting a computer program to save only a few minutes of cpu time.

  7. How do Simulations Relate to Theory and Experiment? • Theory might be available for a model with no good physical realization → compare with "data" from a simulation. Dramatic example: reactor meltdown . . . we want understanding but do not want to perform experiments! • Many physical systems are too complex for theory. If the simulation (playing the role of theory) disagrees with experiment, a new model is needed. An important advantage of simulations → different physical effects that are simultaneously present in real systems may be isolated, and separate consideration of each may provide better understanding.

  8. The “Art” of Random Number Generation Monte Carlo methods need fast, efficient production of random numbers. Physical processes, e.g. electrical circuit white noise, pre-calculated tables, etc. are too slow! Software algorithms are actually deterministic, producing "pseudo-random" numbers... this may be helpful, e.g. to test a program one may want to compare the results with a previous run made using identical random numbers. • Poor quality random numbers → systematic errors!! In general, the random number sequences should be uniform, uncorrelated, and have a long period.

  9. Example: Plot points using consecutive pairs of random numbers as (x, y). [Panels: “bad” generator vs. “good” generator]
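The lecture's scatter plots are not reproduced here, but the "bad generator" idea can be made concrete with RANDU, a historically poor linear congruential generator (the choice of RANDU is mine, not the slide's). Its consecutive outputs satisfy an exact linear relation, which is why pairs and triples fall onto a few lines and planes in such plots.

```python
def randu(seed):
    """RANDU: x_{n+1} = 65539 * x_n mod 2^31, a notoriously bad LCG."""
    x = seed
    while True:
        x = (65539 * x) % 2**31
        yield x

g = randu(1)
xs = [next(g) for _ in range(1000)]

# Because 65539^2 = 6*65539 - 9 (mod 2^31), every triple of consecutive
# outputs obeys x_{k+2} = 6 x_{k+1} - 9 x_k (mod 2^31) exactly -- strong
# correlations that a pair/triple scatter plot exposes visually.
violations = sum((6 * xs[k + 1] - 9 * xs[k] - xs[k + 2]) % 2**31 != 0
                 for k in range(len(xs) - 2))
```

A good modern generator has no such low-dimensional structure; for RANDU the relation holds for every triple, i.e. `violations` is zero.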

  10. Monte Carlo and Stat. Mech. - An Introduction • Phase transitions and critical phenomena are of great interest! How can we learn about them? • The partition function contains all thermodynamic information. • Lattice models are simple and amenable to high resolution study via computer simulation (magnets, lattice gauge models, polymers, etc.) • How do we get to long times, i.e. sample much of phase space?

  11. The Percolation problem • . . . a geometric problem in which random addition of objects can create a path which spans the entire system. • Site Percolation • Lattice sites are randomly occupied with probability p. • Connect occupied nearest neighbor sites to form clusters • Generate many (independent) realizations → Pspan = probability of having a spanning (infinite) cluster. • The order parameter M is the fraction of occupied sites in the lattice which belong to the infinite cluster. • Percolation threshold pc; (p − pc) plays the role of (Tc − T) for a thermal transition. [Figures: simplest “infinite” cluster; a random infinite cluster]
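The recipe on this slide can be sketched directly; the code below (a minimal illustration, assuming a square lattice and top-to-bottom spanning as the criterion) occupies sites with probability p, joins occupied nearest neighbours with union-find, and estimates Pspan from many independent realizations.

```python
import random

def percolates(L, p, rng):
    """One realization of site percolation on an L x L square lattice:
    occupy sites with probability p, merge occupied nearest-neighbour
    sites with union-find, and test for a top-to-bottom spanning cluster."""
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    parent = list(range(L * L))

    def find(a):                      # root with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for i in range(L):
        for j in range(L):
            if not occ[i][j]:
                continue
            if i + 1 < L and occ[i + 1][j]:
                union(i * L + j, (i + 1) * L + j)
            if j + 1 < L and occ[i][j + 1]:
                union(i * L + j, i * L + j + 1)

    top = {find(j) for j in range(L) if occ[0][j]}
    bottom = {find((L - 1) * L + j) for j in range(L) if occ[L - 1][j]}
    return bool(top & bottom)

def p_span(L, p, trials, seed=0):
    rng = random.Random(seed)
    return sum(percolates(L, p, rng) for _ in range(trials)) / trials

# Well above and well below the square-lattice site-percolation
# threshold p_c ~ 0.5927:
high = p_span(16, 0.8, 100)
low = p_span(16, 0.4, 100)
```

Scanning p through pc on increasing lattice sizes sharpens the jump in Pspan, which is exactly the finite-size behavior discussed later in the lecture.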

  12. Simple Sampling Monte Carlo The partition function contains all information about a system Example: N Ising spins on a square lattice At low temperature, only two states contribute very much (i.e. all spins up or all spins down). Simple sampling is very inefficient since it is very unlikely to generate these two states! Use importance sampling instead. Remember, for an N = 10,000 Ising model Z has 2^10,000 terms!
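The low-temperature dominance of the two aligned states can be verified by brute force on a lattice small enough to enumerate; the sketch below (a 3×3 lattice with periodic boundaries and J = kB = 1, my illustrative choice) sums all 2^9 = 512 terms of Z exactly.

```python
import itertools
import math

def ising_energy(spins, L):
    """Energy of an L x L Ising configuration with periodic boundaries,
    H = -J * sum over nearest-neighbour pairs of s_i * s_j, with J = 1."""
    E = 0
    for i in range(L):
        for j in range(L):
            s = spins[i * L + j]
            E -= s * spins[((i + 1) % L) * L + j]   # bond down
            E -= s * spins[i * L + (j + 1) % L]     # bond right
    return E

L, T = 3, 0.5
Z = sum(math.exp(-ising_energy(c, L) / T)
        for c in itertools.product((-1, 1), repeat=L * L))

# The two fully aligned states have the minimal energy -2*L*L = -18;
# at low T they carry almost all of the Boltzmann weight, which is
# why simple (uniform) sampling almost never finds them.
fraction = 2 * math.exp(2 * L * L / T) / Z
```

Even for this tiny system at T = 0.5 the two ground states carry more than 99.9% of the weight, while uniform sampling would hit them with probability 2/512 per draw.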

  13. Single spin-flip sampling for the Ising model Produce the nth state from the mth state … relative probability is Pn/Pm → need only the energy difference, i.e. ΔE = (En − Em), between the states Any transition rate that satisfies detailed balance is acceptable, usually the Metropolis form (Metropolis et al, 1953): W(m→n) = τ0^−1 exp(−ΔE/kBT), ΔE > 0; W(m→n) = τ0^−1, ΔE ≤ 0, where τ0 is the time required to attempt a spin-flip.

  14. Metropolis Recipe: (1.) Choose an initial state (2.) Choose a site i (3.) Calculate the energy change ΔE which results if the spin at site i is overturned (4.) Generate a random number r such that 0 < r < 1 (5.) If r < exp(−ΔE/kBT), flip the spin (6.) Go to the next site and go to (2) Critical Slowing Down!
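The six-step recipe above translates almost line for line into code. The sketch below (units J = kB = 1; lattice size, temperatures, and run lengths are illustrative choices) applies it to the 2-d Ising model; note that step (5) automatically accepts every move with ΔE ≤ 0 since then exp(−ΔE/kBT) ≥ 1 > r.

```python
import math
import random

def metropolis_sweep(spins, L, T, rng):
    """One sweep of the recipe: visit each site, compute the energy
    change dE for overturning its spin, flip if r < exp(-dE/T)."""
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * s * nn                     # J = kB = 1
            if rng.random() < math.exp(-dE / T):
                spins[i][j] = -s

rng = random.Random(12345)
L = 16
spins = [[1] * L for _ in range(L)]             # start fully ordered

for _ in range(200):
    metropolis_sweep(spins, L, 2.0, rng)        # T = 2.0 < Tc ~ 2.269
m_low = abs(sum(map(sum, spins))) / L**2        # stays strongly ordered

for _ in range(200):
    metropolis_sweep(spins, L, 100.0, rng)      # far above Tc
m_high = abs(sum(map(sum, spins))) / L**2       # disorders to |m| ~ 0
```

Below Tc the magnetization per spin stays close to 1, while far above Tc it collapses toward zero, reproducing the configurations shown later in slide 26.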

  15. Some Practical Advice 1. In the very beginning, think! What problem do you really want to solve? What method and strategy are best suited to the study? 2. In the beginning think small! Use small lattices, short runs... search parameter space. 3. Test the random number generator! Find limiting cases → compare with your results. 4. Look at dependence on system size and run length! Use multiple sizes, run lengths → use scaling to analyze data 5. Calculate error bars! Search for both statistical and systematic errors. 6. Make a few very long runs! Check for hidden very long time scales

  16. Perspective:

  17. Perspective:

  18. Molecular Dynamics Methods Integrate the equations of motion numerically, time step = Δt Simple method: expand, r(t + Δt) = r(t) + Δt v(t) + (Δt²/2m) F(t) + … (I.) Improved method: Expand with −Δt as the expansion variable, r(t − Δt) = r(t) − Δt v(t) + (Δt²/2m) F(t) − … (II.) Add (I.) and (II.) → r(t + Δt) = 2r(t) − r(t − Δt) + (Δt²/m) F(t) and v(t) = [r(t + Δt) − r(t − Δt)]/(2Δt)
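The "add (I.) and (II.)" scheme is the Verlet algorithm; its hallmark is that the energy stays bounded over long runs. The sketch below (a harmonic oscillator with m = k = 1, my illustrative choice) integrates x(t + Δt) = 2x(t) − x(t − Δt) + Δt² F(t) and monitors the total energy.

```python
# Verlet integration of a harmonic oscillator (m = k = 1, F = -x):
#   x(t+dt) = 2 x(t) - x(t-dt) + dt^2 * F(t)
#   v(t)    = [x(t+dt) - x(t-dt)] / (2 dt)
dt = 0.01
x = 1.0                                  # start at rest at x = 1
x_prev = x - 0.5 * dt * dt * x           # x(-dt) from one Taylor step, v(0) = 0
energies = []
for step in range(10000):                # 100 periods' worth of steps
    force = -x
    x_new = 2 * x - x_prev + dt * dt * force
    v = (x_new - x_prev) / (2 * dt)      # centred velocity estimate
    energies.append(0.5 * v * v + 0.5 * x * x)
    x_prev, x = x, x_new

drift = max(energies) - min(energies)    # stays O(dt^2), no secular growth
```

The exact energy is 0.5; over 10,000 steps the Verlet energy merely oscillates at the 10⁻⁵ level rather than drifting, which is the symplectic behavior contrasted with predictor-corrector methods on the next slide.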

  19. Other MD algorithms Predictor-corrector (multistep) methods use a larger time step, but not symplectic Trotter-Suzuki decomposition methods* use a much larger time step, symplectic These methods are microcanonical (NVE)…can add ‘thermostats’, etc. to study (NVT), (NPT), etc. * Same mathematics as the Trotter-Suzuki transformation for Quantum Monte Carlo, variables have different meanings

  20. Types of Computer Simulations Deterministic methods . . . (Molecular dynamics) Stochastic methods . . . (Monte Carlo)

  21. Boundary Conditions • Different boundary conditions → different physics

  22. Boundary Conditions • Different boundary conditions → different physics Extrapolate size to ∞ → bulk behavior

  23. Boundary Conditions • Different boundary conditions → different physics Introduce an interface parallel to the top “surface” → study interfacial phenomena

  24. Boundary Conditions • Different boundary conditions → different physics Study nanoparticles

  25. Boundary Conditions • Different boundary conditions → different physics AND, free edges on top, pbc on the sides → study surface critical behavior

  26. Single Spin-Flip Monte Carlo Method Typical spin configurations for the Ising square lattice with pbc [Panels: T << Tc, T ~ Tc, T >> Tc]

  27. Correlation times Define an equilibrium relaxation function φ(t), e.g. φ(t) = (⟨M(0)M(t)⟩ − ⟨M⟩²)/(⟨M²⟩ − ⟨M⟩²), and a correlation time τ from its decay, φ(t) ~ e^(−t/τ); τ ~ ξ^z, i.e. τ diverges at Tc! “Critical slowing down”

  28. Finite Size Effects • A scaling ansatz for the singular part of the free energy: • F(L,T) = L^{−(2−α)/ν} F°(εL^{1/ν}) • where ε = (T − Tc)/Tc. • Choose the scaling variable x = εL^{1/ν} because ξ ~ ε^{−ν} ~ L as T → Tc. (L “scales” with ξ; but one can use L/ξ or εL^{1/ν} as the argument of F°.) Differentiate → • M = L^{−β/ν} M°(εL^{1/ν}) magnetization • χ = L^{γ/ν} χ°(εL^{1/ν}) susceptibility • C = L^{α/ν} C°(εL^{1/ν}) specific heat • M°(x), χ°(x), and C°(x) are scaling functions. Corrections to scaling and finite size scaling appear for small L and large ε.

  29. A case study: L×L Ising square lattice with p.b.c. (from 1976) • The large scatter in the data is characteristic of early Monte Carlo work -- the effort is within the reach of a PC today!

  30. Histogram reweighting • A MC simulation performed at T = T0 generates N configurations with frequency proportional to the Boltzmann weight. A histogram H(E,M)/N ~ the equilibrium probability PK0(E,M), i.e. • PK0(E,M) = W(E,M) e^(−K0E)/Z(K0) (I) • → estimate for the true density of states W(E,M): W(E,M) ∝ H(E,M) e^(K0E) • Thus: PK(E,M) = H(E,M) e^(ΔKE) / Σ H(E,M) e^(ΔKE) (II) • with ΔK = (K0 − K). Any function f(E,M) can be calculated: • ⟨f(E,M)⟩K = Σ f(E,M) PK(E,M)
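The mechanics of reweighting can be checked deterministically on a toy model with a known density of states; the sketch below (N independent two-level units with excitation energy 1, my illustrative choice, and a noise-free "histogram" built from the exact Boltzmann probabilities) reweights a run at K0 to nearby couplings and compares with the exact mean energy.

```python
import math

# Toy system: N independent two-level units, so the density of states
# is the binomial coefficient W(E) = C(N, E) for E = 0..N.
N = 20
W = [math.comb(N, E) for E in range(N + 1)]

def exact_mean_E(K):
    Z = sum(W[E] * math.exp(-K * E) for E in range(N + 1))
    return sum(E * W[E] * math.exp(-K * E) for E in range(N + 1)) / Z

# An ideal (noise-free) histogram recorded in a run at coupling K0:
K0 = 1.0
H = [W[E] * math.exp(-K0 * E) for E in range(N + 1)]

def reweighted_mean_E(K):
    """<E> at coupling K from the K0 histogram: P_K(E) ~ H(E) e^{-(K-K0)E}."""
    w = [H[E] * math.exp(-(K - K0) * E) for E in range(N + 1)]
    return sum(E * w[E] for E in range(N + 1)) / sum(w)
```

With an ideal histogram the reweighted averages match the exact ones; in a real simulation the statistical quality of H limits how far from K0 one can reliably extrapolate.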

  31. Static Critical Behavior - the 3-dim Ising Model Analyze Monte Carlo data with finite size scaling (K = J/kBT) • Kc(L) = Kc + aL^(−1/ν)(1 + bL^(−w) + . . .) Rosengren conjecture: tanh Kc = (5^(1/2) − 2) cos(π/8) → Kc = 0.22165863 Kc = 0.2216576(22) Ferrenberg & Landau, 1991 = 0.2216546(10) Blöte et al, 1995

  32. Off-lattice, Grand canonical MC: Critical coexistence HCSW - hard-core square-well fluid RPM - restricted primitive model Semi-density jump Δρ* = (ρ+* − ρ−*)/2 for a HCSW fluid (ρc* ~ 0.3067) and for the RPM (ρc* ~ 0.079) … dashed line has the Ising slope β = 0.32. t = |T − Tc|/Tc (After Kim, Fisher, and Luijten, 2003)

  33. Critical slowing down: Dynamic Finite Size Scaling Correlation time τ ~ ξ^z • so at Tc, ξ → L • τ ~ L^z → Critical Slowing Down • This is valid only as long as L is sufficiently large that corrections to finite size scaling do not become important. • For the Metropolis method, z ~ 2.1

  34. Remember, an advantage of Monte Carlo methods is that you can devise new sampling schemes that overcome problems with ‘traditional’ methods!

  35. Cluster Flipping Methods Fortuin-Kasteleyn theorem . . . map a ferromagnetic Potts model onto a corresponding percolation model for which successive states are uncorrelated → no critical slowing down! The q-state Potts model: • Replace each pair of interacting spins in the same state by a bond with probability p = 1 − e^(−K) • Repeat for all pairs → lattice with bonds that connect some sites to form clusters with diverse sizes and shapes. • Erase the spins and randomly assign each cluster a spin value.

  36. Swendsen-Wang method • Choose an initial spin state. • Place bonds between each pair of spins with probability p. • Find all clusters, i.e. connected networks of bonds. • Randomly assign a spin value to all sites in each cluster. • Erase bonds → new spin state. [Figure: original spins → clusters formed → “decorated” clusters]

  37. High T: clusters are small. • Low T: most nearest neighbors in the same state are in the same cluster → system oscillates between similar structures. • Near Tc: a rich array of clusters is produced and each configuration differs substantially from its predecessor → critical slowing down is reduced! z ~ 2.1 for Metropolis; z ~ 0 in 2-dim and ~ 0.5 in 3-dim for SW (Wang, 1990)

  38. Wolff method Grow single clusters and flip them sequentially: • Randomly choose a site. • Draw bonds to all nearest neighbors that are in the same state with probability p = 1 − e^(−K). Repeat, iteratively, to form a cluster of connected sites. • Flip the entire cluster of connected sites. • Choose another initial site and repeat the process. Wolff kinetics has a smaller prefactor and smaller dynamic exponent than does the Swendsen-Wang method.
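The Wolff recipe is compact enough to sketch in full. The code below applies it to the 2-d Ising model; note that the slide's p = 1 − e^(−K) is the Potts-normalization form, which for ±1 Ising spins becomes p = 1 − e^(−2J/kBT) (units J = kB = 1; lattice size, temperature, and seed are illustrative choices).

```python
import math
import random

def wolff_step(spins, L, p, rng):
    """Grow one Wolff cluster from a random seed site (bonds to aligned
    nearest neighbours with probability p), then flip the whole cluster."""
    i0 = rng.randrange(L * L)
    seed_spin = spins[i0]
    cluster = {i0}
    stack = [i0]
    while stack:
        i = stack.pop()
        x, y = divmod(i, L)
        for nx, ny in ((x+1) % L, y), ((x-1) % L, y), (x, (y+1) % L), (x, (y-1) % L):
            j = nx * L + ny
            if j not in cluster and spins[j] == seed_spin and rng.random() < p:
                cluster.add(j)
                stack.append(j)
    for i in cluster:
        spins[i] = -spins[i]
    return len(cluster)

rng = random.Random(7)
L, T = 16, 1.0                        # well below Tc ~ 2.269
p = 1.0 - math.exp(-2.0 / T)          # Ising bond probability, J = kB = 1
spins = [rng.choice((-1, 1)) for _ in range(L * L)]
for _ in range(100):
    wolff_step(spins, L, p, rng)
m = abs(sum(spins)) / (L * L)         # orders quickly below Tc
```

A handful of cluster flips orders the random start far faster than single-spin flips would, and near Tc the large flipped clusters are what suppress critical slowing down.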

  39. “Improved estimators” It may be possible to calculate a thermodynamic property using clusters . . . for some quantities “noise reduction” occurs, e.g. the susceptibility for O(N) models is given by the mean cluster size, i.e., χ ∝ ⟨|C|⟩ where |C| is the size of a cluster. The statistical error is less than that obtained from fluctuations in the order parameter (fluctuations due to small clusters cancel).

  40. N-fold way and extensions • The above methods are time-step driven → problems at low T! • In event driven algorithms (e.g., the “N-fold Way”, Bortz et al, 1975) a flip occurs at each step: For discrete spin models → only a few distinct flipping probabilities. Collect spins into lists; spins in each list have equivalent local environments (the Ising square lattice has N = 10 classes). • Total probability of some class-l spin flipping in a step is pl ∝ number of spins in class l; QM = cumulative total for all classes with l ≤ M

  41. Generate a random number 0 < rn < QN → class for the next flip, i.e., class M is chosen if QM−1 < rn < QM. • Choose another rn to pick a class-M spin. • A 3rd random number → time elapsed before flipping. Find properties from lifetime-weighted averages. At low T, the gain in performance can be huge! Generalize: “absorbing Markov chains” (Novotny, 1995)
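The class-selection step above is cumulative ("tower") sampling: one random number in (0, QN) picks the class whose cumulative total it falls under. The sketch below demonstrates just that step with illustrative placeholder class probabilities, not the actual Ising class weights.

```python
import random

def pick_class(class_prob, rng):
    """Tower selection, as in the N-fold way: build the cumulative
    totals Q_1..Q_N, draw 0 < rn < Q_N, and return the class M for
    which Q_(M-1) < rn < Q_M."""
    Q = []
    total = 0.0
    for pl in class_prob:
        total += pl
        Q.append(total)
    r = rng.random() * total
    for M, QM in enumerate(Q):
        if r < QM:
            return M
    return len(Q) - 1                 # guard against rounding at the top

rng = random.Random(3)
probs = [0.7, 0.2, 0.05, 0.04, 0.01]  # hypothetical class flip weights
counts = [0] * len(probs)
for _ in range(20000):
    counts[pick_class(probs, rng)] += 1
freqs = [c / 20000 for c in counts]   # should reproduce probs
```

Because every step produces a flip, the cost of the vanishing acceptance rate at low T is replaced by this cheap class lookup plus the lifetime bookkeeping.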

  42. Classical spin models e.g., the classical Heisenberg model with spin vectors. The Metropolis method involves “spin-tilts” instead of “spin-flips”. Over-relaxation method (Brown and Woch, 1987; Creutz, 1987): Precess the spin about the effective interaction field (due to its neighbors) by an angle φ using the equation of motion. This is microcanonical (and also deterministic) but it decorrelates successive states. Combine with Metropolis → becomes canonical (an example of a hybrid method). What about cluster-flipping?

  43. Wolff embedding trick and cluster-flipping Classical spin model → inhomogeneous Ising model (Wolff, 1989) • Choose a direction randomly in space. • Project spins onto that direction to form an Ising model with interactions that depend on the spin projections • Then, use cluster flipping • Reverse the components parallel to the chosen direction for all spins in a cluster to yield a new spin configuration. • Choose a new (random) direction and repeat.

  44. Variation on a Theme... Probability-Changing Cluster Algorithm (Tomita and Okabe, 2001) Goal: Tune the critical point automatically Extend the SW algorithm by increasing or decreasing the probability of forming bonds depending on whether or not the clusters are percolating. Recipe: 1. Choose an initial configuration and a value of p 2. Construct the Kasteleyn-Fortuin clusters using p. Check to see if the system is percolating. 3. Update the spins using the SW rule. 4. If the system was percolating, decrease p by Δp. If the system was not percolating, increase p by Δp 5. Go to 2. Note: Start with some value of p; it converges toward the critical value as the simulation proceeds.

  45. Test for the 2-dim Ising model

  46. Multicanonical Sampling P(E) may contain multiple maxima that are widely spaced in configuration space (1st order phase transitions, spin glasses, etc.) → Standard methods become “trapped”; infrequent transitions between maxima lead to ill determined relative weights of the maxima and the minima of P(E). → Modify the single spin flip probability to enhance the probability of the “unlikely” states between the maxima → accelerates effective sampling!

  47. How to get Heff: Measure P(E) (via standard Monte Carlo) where it is easy and use it as an estimate for a 2nd run made closer to the interesting region. Continue to the “unknown” region where standard sampling fails. Example: a q = 7 Potts model (After Janke, 1992).

  48. A case study: Gaussian Spin Glass in 3-dim Finite size scaling plot: kBTSG/J = 0.16, ν = 1.1 Finite lattice spin glass correlation length After Lee and Young (2003)

  49. Reverse Monte Carlo: • Choose an initial (random?) state • Exchange a pair of particles if the sum of the squares of the deviations from, e.g., the measured radial distribution function is reduced • Continue until ‘converged’
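The three bullets above can be sketched with a deliberately minimal caricature (my illustrative toy, not a full g(r) fit): particles on a ring of sites, a hypothetical 'measured' target for the number of occupied nearest-neighbour pairs, and exchange moves accepted only when the squared deviation from that target does not increase.

```python
import random

L, n_part = 50, 20
target_pairs = 15          # hypothetical 'experimental' datum

def nn_pairs(occ):
    """Number of occupied nearest-neighbour pairs on the ring."""
    return sum(occ[i] * occ[(i + 1) % L] for i in range(L))

rng = random.Random(11)
occ = [1] * n_part + [0] * (L - n_part)
rng.shuffle(occ)           # initial (random) state

chi2 = (nn_pairs(occ) - target_pairs) ** 2
chi2_start = chi2
for _ in range(5000):
    i, j = rng.randrange(L), rng.randrange(L)
    if occ[i] == occ[j]:
        continue
    occ[i], occ[j] = occ[j], occ[i]           # exchange a pair
    new_chi2 = (nn_pairs(occ) - target_pairs) ** 2
    if new_chi2 <= chi2:
        chi2 = new_chi2                       # accept: deviation reduced
    else:
        occ[i], occ[j] = occ[j], occ[i]       # reject: undo the exchange
```

By construction the deviation never increases, so the configuration is driven toward consistency with the 'data'; a real RMC fit uses the full radial distribution function (and usually a tolerance that also accepts small increases).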
