DL_POLY: Software and Applications




  1. DL_POLY: Software and Applications I.T. Todorov & W. Smith ARC Group & CC Group CSED, STFC Daresbury Laboratory, Daresbury, Warrington WA4 1EP, Cheshire, England, UK

  2. Where is Daresbury?

  3. Molecular Dynamics: Definitions
  • Theoretical tool for modelling the detailed microscopic behaviour of many different types of systems, including gases, liquids, solids, surfaces and clusters.
  • In an MD simulation, the classical equations of motion governing the microscopic time evolution of a many-body system are solved numerically, subject to the boundary conditions appropriate for the geometry or symmetry of the system.
  • Can be used to monitor the microscopic mechanisms of energy and mass transfer in chemical processes, and dynamical properties such as absorption spectra, rate constants and transport properties can be calculated.
  • Can be employed as a means of sampling from a statistical mechanical ensemble and determining equilibrium properties. These properties include average thermodynamic quantities (pressure, volume, temperature, etc.), structure, and free energies along reaction paths.
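
The second bullet above is the algorithm in miniature: integrate Newton's equations numerically. A minimal velocity Verlet loop in Python/numpy is sketched below; the Lennard-Jones force law, the lack of cutoff and periodic boundaries, and all names are illustrative assumptions, not DL_POLY code.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces and energy (no cutoff, no PBC)."""
    n, f, u = len(pos), np.zeros_like(pos), 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            s6 = (sigma**2 / np.dot(r, r)) ** 3
            u += 4.0 * eps * (s6**2 - s6)
            fij = 24.0 * eps * (2.0 * s6**2 - s6) / np.dot(r, r) * r
            f[i] += fij
            f[j] -= fij
    return f, u

def velocity_verlet(pos, vel, mass, dt, nsteps):
    """Integrate the classical equations of motion numerically."""
    f, _ = lj_forces(pos)
    for _ in range(nsteps):
        vel += 0.5 * dt * f / mass   # half-kick with old forces
        pos += dt * vel              # drift
        f, _ = lj_forces(pos)        # recompute forces at new positions
        vel += 0.5 * dt * f / mass   # half-kick with new forces
    return pos, vel
```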

  4. DL_POLY Project Background
  • General purpose parallel (classical) MD simulation software
  • Conceived to meet the needs of CCP5 – The Computer Simulation of Condensed Phases (academic collaboration community)
  • Written in modularised Fortran90 (NagWare & FORCHECK compliant) with MPI2 (MPI1 + MPI-I/O); fully self-contained
  • 1994 – 2011: DL_POLY_2 (RD) by W. Smith & T.R. Forester (funded for 6 years by EPSRC at DL) -> DL_POLY_CLASSIC
  • 2003 – 2011: DL_POLY_3 (DD) by I.T. Todorov & W. Smith (funded for 4 years by NERC at Cambridge) -> DL_POLY_4
  • Over 11,000 licences taken out since 1994
  • Over 1,000 registered FORUM members since 2005
  • Available free of charge (under licence) to university researchers (provided as code) and at cost to industry

  5. DL_POLY_DD Development Statistics

  6. DL_POLY_DD Licence Statistics

  7. DL_POLY Licence Statistics

  8. DL_POLY Licence Statistics

  9. DL_POLY Licence Statistics

  10. DL_POLY Project Current State
  • January 2011: DL_POLY_2 -> DL_POLY_CLASSIC under a BSD-type licence (BS retired but supporting GUI and fixes)
  • October 2010: DL_POLY_3 -> DL_POLY_4, still under STFC licence; over 1,300 licences taken out since November 2010
  • Rigid Body dynamics
  • Parallel I/O & netCDF I/O – NAG dCSE (IJB & ITT)
  • CUDA+OpenMP port (source, ICHEC) & MS Windows port (installers)
  • SPME processor grid freed from 2^N decomposition – NAG dCSE (IJB)
  • Load Balancer development (LJE, finished 30/03/2011)
  • Continuous development of DL_FIELD (pdb to DLP I/O, CY)

  11. Current Versions
  • DL_POLY_4 (version 1.2)
    • Dynamic Decomposition parallelisation, based on domain decomposition but with dynamic load balancing
    • limits up to ≈2.1×10⁹ atoms with inherent parallelisation
    • Full force field and molecular description, with rigid body description
    • Free format (flexible) reading with some fail-safe features and basic reporting (but not fully fool-proofed)
  • DL_POLY Classic (version 1.6)
    • Replicated Data parallelisation; limits up to ≈30,000 atoms with good parallelisation up to 64 (system dependent) processors (running on any processor count)
    • Full force field and molecular description
    • Hyper-dynamics: Temperature Accelerated Dynamics & Biased Potential Dynamics, Solvation Dynamics – Spectral Shifts, Metadynamics, Path Integral MD
    • Free format reading, but somewhat strict

  12. Supported Molecular Entities
  • Point ions and atoms
  • Polarisable ions (core + shell)
  • Flexible molecules
  • Rigid bonds
  • Rigid molecules
  • Flexibly linked rigid molecules
  • Rigid bond linked rigid molecules

  13. Force Field Definitions – I
  • particle: a rigid ion or atom (charged or not), a core or a shell of a polarisable ion (with or without associated degrees of freedom), or a massless charged site. A particle is a countable object and has a global ID index.
  • site: a particle prototype that serves to define the chemical & physical nature (topology/connectivity/stoichiometry) of a particle (mass, charge, frozen-ness). Sites are not atoms; they are prototypes!
  • Intra-molecular interactions: chemical bonds, bond angles, dihedral angles, improper dihedral angles, inversions. Usually, the members of a unit do not interact via an inter-molecular term; however, this can be overridden for some interactions. These are defined by site.
  • Inter-molecular interactions: van der Waals, metal (EAM, Gupta, Finnis-Sinclair, Sutton-Chen), Tersoff, three-body, four-body. Defined by species.
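
To make the site/particle distinction concrete, here is a minimal sketch; the class and field names are hypothetical, chosen only to mirror the slide's wording (mass, charge, frozen-ness, global ID), not DL_POLY's actual data structures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Site:
    """Prototype: defines the chemical & physical nature, declared once."""
    name: str
    mass: float
    charge: float
    frozen: bool = False

@dataclass
class Particle:
    """Countable instance of a Site, carrying a global ID index."""
    global_id: int
    site: Site          # its chemical nature never changes in space or time
    position: tuple

OW = Site("OW", 15.9994, -0.8476)   # one prototype (SPC/E-like water oxygen) ...
oxygens = [Particle(i, OW, (0.0, 0.0, float(i))) for i in range(3)]  # ... many particles
```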

  14. Force Field Definitions – II
  • Electrostatics: Standard Ewald*, Hautman-Klein (2D) Ewald*, SPM Ewald (3D FFTs), Force-Shifted Coulomb (sketched below), Reaction Field, Fennell damped FSC+RF, distance-dependent dielectric constant, Fuchs correction for non-charge-neutral MD cells.
  • Ion polarisation via Dynamic (Adiabatic) or Relaxed shell model.
  • External fields: Electric, Magnetic, Gravitational, Oscillating & Continuous Shear, Containing Sphere, Repulsive Wall.
  • Intra-molecular-like interactions: tethers, core-shell units, constraint and PMF units, rigid body units. These are also defined by site.
  • Potentials: parameterised analytical forms defining the interactions. These are always spherically symmetric!
  • THE CHEMICAL NATURE OF PARTICLES DOES NOT CHANGE IN SPACE AND TIME!!!
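
Of the electrostatics options above, Force-Shifted Coulomb is simple enough to sketch in a few lines. Assuming the common shifting recipe, in which both the energy and the force are made to vanish at the cutoff r_c (and units with the Coulomb constant set to 1), a minimal version is:

```python
def force_shifted_coulomb(qi, qj, r, rc):
    """Energy and scalar force for a force-shifted Coulomb pair.

    U(r) = qi*qj*(1/r - 1/rc + (r - rc)/rc**2); both U and F = -dU/dr
    go to zero at r = rc, so no long-ranged Ewald machinery is needed.
    """
    if r >= rc:
        return 0.0, 0.0
    energy = qi * qj * (1.0 / r - 1.0 / rc + (r - rc) / rc**2)
    force = qi * qj * (1.0 / r**2 - 1.0 / rc**2)
    return energy, force
```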

  15. Force Field by Sums

  16. Ensembles and Algorithms
  • Integration: available as velocity Verlet (VV) or leapfrog Verlet (LFV), generating flavours of the following ensembles:
    • NVE
    • NVT (Ekin) Evans
    • NVT Andersen^, Langevin^, Berendsen, Nosé-Hoover
    • NPT Langevin^, Berendsen, Nosé-Hoover, Martyna-Tuckerman-Klein^
    • NσT/NPnAT/NPnγT Langevin^, Berendsen, Nosé-Hoover, Martyna-Tuckerman-Klein^
  • Constraints & Rigid Body Solvers (see the SHAKE sketch after this list):
    • VV dependent – RATTLE, No_Squish, QSHAKE*
    • LFV dependent – SHAKE, Euler-Quaternion, QSHAKE*
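
As a taste of the LFV-side constraint solver named above, here is one SHAKE sweep for a single bond constraint in Python/numpy. Production codes iterate over all constraints until every one converges; this pair-only version is a simplified sketch, not DL_POLY's implementation.

```python
import numpy as np

def shake_pair(r_new, r_old, inv_m, a, b, d, tol=1e-8, max_iter=100):
    """Correct post-move positions so the a-b distance equals d."""
    for _ in range(max_iter):
        rab = r_new[a] - r_new[b]
        diff = np.dot(rab, rab) - d * d            # constraint violation
        if abs(diff) < tol:
            break
        rab_old = r_old[a] - r_old[b]              # pre-move bond vector
        g = diff / (2.0 * (inv_m[a] + inv_m[b]) * np.dot(rab, rab_old))
        r_new[a] -= g * inv_m[a] * rab_old         # mass-weighted corrections
        r_new[b] += g * inv_m[b] * rab_old         # along the old bond direction
    return r_new
```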

  17. Assumed Parallel Architecture
  DL_POLY is designed for homogeneous distributed-memory parallel machines.
  [Diagram: eight processors P0–P7, each paired with its own local memory M0–M7.]

  18. Replicated Data
  [Diagram: four processors A–D, each holding a full copy of the system and each running the complete cycle: Initialize → Forces → Motion → Statistics → Summary.]

  19. Bonded Forces within RD
  [Diagram: the molecular force field definition expands into a global force field replicated on all processors; each processor P0–P2 evaluates only its own local force terms.]
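
The essence of Replicated Data is easy to sketch with mpi4py: every rank holds the whole configuration, computes only its share of the force terms, and a global sum leaves the complete force array on every rank. The round-robin work split and the placeholder force below are hypothetical illustrations:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

n_atoms = 1000
pos = np.random.default_rng(42).random((n_atoms, 3))  # same seed => identical replica on every rank

f_local = np.zeros((n_atoms, 3))
for i in range(rank, n_atoms, nprocs):                # this rank's share of the terms
    f_local[i] = -pos[i]                              # placeholder "force" evaluation

f_global = np.empty_like(f_local)
comm.Allreduce(f_local, f_global, op=MPI.SUM)         # global sum: full forces everywhere
```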

  20. RD Scheme for long-ranged part of SPME
  U. Essmann, L. Perera, M.L. Berkowitz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys., 103, 8577 (1995)
    1. Calculate self-interaction correction
    2. Initialise FFT routine (FFT – 3D FFT)
    3. Calculate B-spline coefficients
    4. Convert atomic coordinates to scaled fractional units
    5. Construct B-splines
    6. Construct charge array Q
    7. Calculate FFT of Q array
    8. Construct array G
    9. Calculate FFT of G array
    10. Calculate net Coulombic energy
    11. Calculate atomic forces
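
A compressed numpy sketch of steps 2-10 follows. To stay short it assigns each charge to its nearest mesh point instead of spreading it with B-splines (steps 3-5), so it shows the plain particle-mesh Ewald idea, in Gaussian units on a cubic box, rather than DL_POLY's SPME kernel:

```python
import numpy as np

def recip_energy(pos_frac, q, box_len, alpha, mesh=32):
    """Reciprocal-space Ewald energy (Gaussian units, cubic box)."""
    # Step 6: charge array Q (nearest grid point stands in for B-spline spreading)
    Q = np.zeros((mesh, mesh, mesh))
    for (i, j, k), qi in zip(np.rint(pos_frac * mesh).astype(int) % mesh, q):
        Q[i, j, k] += qi
    # Step 7: FFT of Q gives the structure factor on the mesh
    S = np.fft.fftn(Q)
    # Step 8: influence function G(k) = (4*pi/k^2) * exp(-k^2 / (4*alpha^2))
    k1d = 2.0 * np.pi * np.fft.fftfreq(mesh, d=box_len / mesh)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                        # avoid divide-by-zero at k = 0
    G = 4.0 * np.pi / k2 * np.exp(-k2 / (4.0 * alpha**2))
    G[0, 0, 0] = 0.0                         # the k = 0 term is excluded
    # Step 10: net reciprocal-space Coulombic energy
    return np.sum(G * np.abs(S) ** 2) / (2.0 * box_len**3)
```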

  21. Domain Decomposition
  [Diagram: the simulation cell divided into four domains A–D, one per processor.]

  22. Bonded Forces within DD
  [Diagram: the molecular force field definition, held globally, must be mapped onto processor domains; each processor P0–P2 works with local atomic indices. Tricky!]
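
What makes this step tricky is the translation from globally defined bonds to each domain's local index space. A sketch follows, with a hypothetical ownership rule (the domain holding the first atom as resident scores the bond) to avoid double counting across domains:

```python
def local_bonds(global_bonds, resident_ids, halo_ids):
    """Map globally defined bonds onto one domain's local indices."""
    local_index = {g: i for i, g in enumerate(list(resident_ids) + list(halo_ids))}
    owned = set(resident_ids)
    bonds = []
    for ga, gb in global_bonds:
        # Compute here only if we own atom a and can see atom b (resident or halo)
        if ga in owned and gb in local_index:
            bonds.append((local_index[ga], local_index[gb]))
    return bonds

# e.g. a domain owning atoms {1, 2} with halo atom {7}:
print(local_bonds([(1, 2), (2, 7), (7, 9)], [1, 2], [7]))   # -> [(0, 1), (1, 2)]
```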

  23. DD Scheme for long-ranged part of SPME
  U. Essmann, L. Perera, M.L. Berkowitz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys., 103, 8577 (1995)
    1. Calculate self-interaction correction
    2. Initialise FFT routine (FFT – IJB's DaFT: 3M² 1D FFTs)
    3. Calculate B-spline coefficients
    4. Convert atomic coordinates to scaled fractional units
    5. Construct B-splines
    6. Construct partial charge array Q
    7. Calculate FFT of Q array
    8. Construct partial array G
    9. Calculate FFT of G array
    10. Calculate net Coulombic energy
    11. Calculate atomic forces
  I.J. Bush, I.T. Todorov, W. Smith, Comp. Phys. Commun., 175, 323 (2006)

  24. Performance Weak Scaling on IBM p575 2005-2011

  25. Rigid Bodies versus Constraints 450,000 particles with DL_POLY_4

  26. I/O Weak Scaling on IBM p575 2005-2007

  27. Benchmarking BG/L Jülich 2007

  28. Benchmarking XT4/5 UK 2010

  29. Benchmarking on Various Platforms

  30. Importance of I/O - I
  • Types of MD studies most dependent on I/O:
    • Large length-scales (10⁹ particles), short time-scale, such as screw deformations
    • Medium-big length-scales (10⁶–10⁸ particles), medium time-scale (ps–ns), such as radiation damage cascades
    • Medium length-scale (10⁵–10⁶ particles), long time-scale (ns–μs), such as membrane and protein processes
  • Types of I/O:
                   portable   human readable   loss of precision   size
    ASCII             +              +                 –             –
    Binary            –              –                 +             +
    XDR Binary        +              –                 +             +
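
The ASCII and binary rows of the table are easy to demonstrate; the snippet below (file names and sizes are illustrative, from this hypothetical example) writes the same coordinates both ways and exposes the precision lost to a fixed-width format:

```python
import numpy as np

coords = np.random.default_rng(1).random((100_000, 3))

np.savetxt("frame.txt", coords, fmt="%12.6f")   # ASCII: portable, human readable
coords.tofile("frame.bin")                      # native binary: compact, bit-exact

err = np.abs(np.loadtxt("frame.txt") - coords).max()
print(err)   # up to ~5e-7: precision sacrificed to the %12.6f format
# frame.txt is ~3.9 MB vs 2.4 MB for frame.bin, but the binary file is not
# portable across endiannesses -- the gap XDR/netCDF-style formats close.
```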

  31. Importance of I/O - II
  • Example: a 15-million-particle system simulated with 2048 MPI tasks
  • MD time per timestep ~0.7 (2.7) seconds on Cray XT4 (BG/L)
  • Configuration read ~100 sec. (once during the simulation)
  • Configuration write ~600 sec. for 1.1 GB with the fastest I/O method – MPI-I/O on Cray XT4 (parallel direct access on BG/L)
  • On BG/L with 16,000 MPI tasks: MD time per timestep 0.5 sec., but ~18,000 sec. to write one configuration frame
  • I/O in native binary is only 3-5 times faster and 3-7 times smaller
  • Some unpopular solutions:
    • Saving only the important fragments of the configuration
    • Saving only fragments that have moved more than a given distance between two consecutive dumps
    • Distributed dump – a separate configuration file per MPI task (as in CFD)
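
A back-of-envelope check of the BG/L numbers above shows the scale of the problem; the timings are the slide's own, while the dump interval is a hypothetical choice:

```python
step_time = 0.5         # seconds per MD timestep on 16,000 BG/L tasks (above)
write_time = 18_000.0   # seconds to write one configuration frame (above)
dump_every = 1_000      # hypothetical dump interval, in timesteps

io_fraction = write_time / (dump_every * step_time + write_time)
print(f"I/O share of wall time: {io_fraction:.0%}")   # ~97% at this interval
```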

  32. I/O Solutions in DL_POLY_4
  1. Serial read and write (sorted/unsorted) – a single MPI task, the master, handles it all; all the rest communicate to it in turn (or are broadcast to) while the master completes writing a configuration of the time evolution.
  2. Parallel write via direct access or MPI-I/O (sorted/unsorted) – ALL / SOME MPI tasks print into the same file in an orderly manner so that no overlapping occurs (see the sketch below). Note that with Fortran direct-access printing the behaviour is not defined by the Fortran standard, and in particular we have experienced problems when the disk cache is not coherent with the memory.
  3. Parallel read via MPI-I/O or Fortran
  4. Serial NetCDF read and write using NetCDF libraries for machine-independent data formats of array-based scientific data (widely used by various scientific communities).
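
Option 2 can be sketched minimally with mpi4py: each rank writes its fixed-length records into one shared file at a computed byte offset, so no two ranks ever overlap. The record layout and file name below are hypothetical; the MPI-I/O calls (File.Open, Write_at_all) are the standard API:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

n_local = 10                                            # atoms owned by this rank
record = np.full((n_local, 3), float(comm.Get_rank()))  # stand-in for sorted coordinates

# Exclusive prefix sum of atom counts gives each rank its byte offset
start = comm.exscan(n_local) or 0                 # None on rank 0 -> 0
offset = start * 3 * record.itemsize              # bytes written by lower ranks

fh = MPI.File.Open(comm, "HISTORY.bin", MPI.MODE_WRONLY | MPI.MODE_CREATE)
fh.Write_at_all(offset, record)                   # collective, no overlap by construction
fh.Close()
```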

  33. Performance for 216,000 Ions of NaCl on XT5

  34. MPI-I/O Write Performance for 216,000 Ions of NaCl on XT5

  35. MPI-I/O Read Performance for 216,000 Ions of NaCl on XT5

  36. DL_POLY Project Background
  • Rigid body dynamics, and SPME freed from 2^N decomposition
  • no topology and calcite potentials
  • Fully parallel I/O: reading and writing in ASCII, optionally including netCDF binary in AMBER format
  • CUDA (ICHEC) and Windows ports
  • New GUI (Bill Smith)
  • Over 1,300 licences taken out since November 2010
  • DL_FIELD force field builder (Chin Yong) – 300 licences

  37. DL_FIELD
  • AMBER & CHARMM to DL_POLY
  • OPLSAA & Dreiding to DL_POLY
  [Diagram: protonated xyz/PDB input → the DL_FIELD 'black box' → FIELD and CONFIG files.]

  38. DL_POLY Roadmap
  • August 2011 – March 2012: PRACE-1IP-WP7 funds effort by ICHEC towards a CUDA+OpenMP port, SC@WUT towards an OpenCL+OpenMP port, and FZ Jülich for FMP library testing
  • October 2011 – October 2012: EPSRC's dCSE funds effort by NAG Ltd.
    • OpenMP within vanilla MPI
    • Beyond 2.1 billion particles
  • October 2011 – September 2012: Two-Temperature thermostat models, Fragmented I/O, On-the-Fly properties
  • November 2011 – September 2013: MMM@HPC, Gentle thermostat, Hyperdynamics

  39. Acknowledgements
  Thanks to:
  • Bill Smith (retired)
  • Ian Bush (NAG Ltd.)
  • Christos Kartsaklis (ORNL), Ruairi Nestor (ICHEC)
  • http://www.ccp5.ac.uk/DL_POLY/
