
INAF Osservatorio Astrofisico di Catania

Using LAPI and MPI-2 in an N-body cosmological code on IBM SP. “ScicomP 9”, Bologna, March 23–26, 2004. M. Comparato, U. Becciani, C. Gheller, V. Antonuccio, INAF Osservatorio Astrofisico di Catania. Topics: the N-Body project at OACT, the FLY code, performance analysis with LAPI and MPI-2.



Presentation Transcript


  1. INAF Osservatorio Astrofisico di Catania. “ScicomP 9”, Bologna, March 23–26, 2004. Using LAPI and MPI-2 in an N-body cosmological code on IBM SP. M. Comparato, U. Becciani, C. Gheller, V. Antonuccio. • The N-Body project at OACT • The FLY code • Performance analysis with LAPI and MPI-2 • Questions

  2. INAF Osservatorio Astrofisico di Catania. FLY Project. • People: V. Antonuccio, U. Becciani, M. Comparato, D. Ferro • Funds: included in the project “Problematiche Astrofisiche Attuali ed Alta Formazione nel Campo del Supercalcolo”, funded by MIUR with more than 500,000 Euros + 170,000 Euros; INAF provides grants on MPP systems at CINECA • Resources: IBM SP, SGI Origin systems, Cray T3E

  3. INAF Osservatorio Astrofisico di Catania / INAF Astrophysical Observatory of Catania. IBM SP POWER3:
  • 24 processors at 222 MHz
  • Global RAM memory: 48 GB
  • Disk space: 254 GB (72.8 GB HD per node + 36.2 GB HD on the CWS)
  • Network topology: SPS scalable Omega switch, with Fast Ethernet node interconnection
  • Bandwidth: 300 MB/s peak bi-directional transfer rate
  • Programming languages: C, C++, Fortran 90
  • Parallel paradigms: OpenMP, MPI, LAPI

  4. INAF Osservatorio Astrofisico di Catania / INAF Astrophysical Observatory of Catania. New system, IBM POWER4 P650:
  • 8 processors at 1.1 GHz
  • Global RAM memory: 16 GB (2 GB per processor)
  • Disk array: 254 GB
  • L2 cache: 1.5 MB, L3 cache: 128 MB

  5. Gravitational N-body problem: the cosmological simulation. The N-body technique allows us to perform cosmological simulations that describe the evolution of the universe. With this method matter is represented as a set of particles: • each particle is characterized by mass, position and velocity • the only force considered is gravity • the evolution of the system is obtained by numerically integrating the equations of motion over a proper time interval

  6. Gravitational N-body problem: the cosmological simulation. • The direct interaction (P-P) method is conceptually the simplest… • …but it scales as O(N^2), which makes it impossible to run simulations with N >= 10^5 particles. • To overcome this problem, tree- and mesh-based algorithms have been developed, which scale as O(N log N) and O(N) respectively. • Even so, only supercomputers and parallel codes allow the user to run simulations with N >= 10^7 particles.

  7. FLY: parallel tree N-body code for cosmological applications. • Based on the Barnes-Hut algorithm (J. Barnes & P. Hut, Nature, 324, 1986) • Fortran 90 parallel code • High-performance code for MPP/SMP architectures using a one-sided communication paradigm: SHMEM, LAPI • Runs on Cray T3E systems, SGI Origin, and IBM SP • Typical simulations require 350 MB of RAM for 1 million particles

  8. Gravitational N-body problem: the Barnes-Hut algorithm. The particles evolve according to the laws of Newtonian physics:

  F_i = -G m_i Σ_{j≠i} m_j d_ij / |d_ij|^3,   where d_ij = x_i - x_j.

  Considering a region Ω, the force component on the i-th particle may be computed from the mass distribution of Ω about its centre of mass:

  F_i(Ω) ≈ -G m_i M_Ω d_i,cm / |d_i,cm|^3 + higher-order (multipole) corrections,

  where M_Ω is the total mass contained in Ω and d_i,cm is the vector from the centre of mass of Ω to particle i.
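A minimal direct-summation (P-P) sketch of the force law above, written in plain C for illustration. The softening length eps and the array layout are assumptions made here so the example is self-contained; they are not taken from FLY.

```c
#include <math.h>

#define NDIM 3

/* Direct P-P summation: F_i = -G m_i sum_j m_j d_ij / |d_ij|^3, d_ij = x_i - x_j.
   A softening length eps is added to avoid the singularity at d_ij = 0. */
void pp_forces(int n, const double m[], double x[][NDIM],
               double G, double eps, double F[][NDIM])
{
    for (int i = 0; i < n; i++) {
        F[i][0] = F[i][1] = F[i][2] = 0.0;
        for (int j = 0; j < n; j++) {
            if (j == i) continue;
            double d[NDIM], r2 = eps * eps;
            for (int k = 0; k < NDIM; k++) {          /* d_ij = x_i - x_j */
                d[k] = x[i][k] - x[j][k];
                r2  += d[k] * d[k];
            }
            double inv_r3 = 1.0 / (r2 * sqrt(r2));    /* softened |d_ij|^-3 */
            for (int k = 0; k < NDIM; k++)
                F[i][k] -= G * m[i] * m[j] * d[k] * inv_r3;
        }
    }
}
```

The double loop over all particle pairs is what gives the O(N^2) cost mentioned on slide 6.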

  9. Gravitational N-body problem: tree formation. [Figure: 2D domain decomposition and the corresponding tree structure, starting from the root cell.] The splitting of each sub-domain is carried out until only one body (a leaf) is contained in each box.

  10. Gravitational N-body problem: force computation. Two phases: • tree walk procedure • force computation. The force on any particle is computed as the sum of the forces from nearby particles plus the forces from distant cells, whose mass distributions are approximated by multipole series truncated, typically, at the quadrupole order. During the tree walk, a cell of size Cellsize at distance d_i,cm from particle i is marked (accepted for the interaction list) if the opening criterion Cellsize / d_i,cm <= θ is satisfied; otherwise the cell is opened and its children are examined (a sketch of this walk follows below).
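A minimal sketch of the tree-walk phase using the opening criterion above, assuming a hypothetical cell layout (centre of mass, size, eight children); this is not FLY's actual data structure, only an illustration of the recursion.

```c
#include <math.h>

typedef struct Cell {
    double cm[3];            /* centre of mass of the cell */
    double size;             /* side length (Cellsize) */
    struct Cell *child[8];   /* sub-cells; unused entries are null pointers */
    int is_leaf;             /* leaf = a single body */
} Cell;

/* Build the interaction list for the particle at position xi:
   accept a cell when size / d_i,cm <= theta, otherwise open it. */
void tree_walk(const Cell *c, const double xi[3], double theta,
               const Cell **ilist, int *nlist)
{
    double d2 = 0.0;
    for (int k = 0; k < 3; k++) {
        double dk = xi[k] - c->cm[k];
        d2 += dk * dk;
    }
    double d = sqrt(d2);                       /* d_i,cm */

    if (c->is_leaf || c->size <= theta * d) {
        ilist[(*nlist)++] = c;                 /* accepted: use its multipoles */
    } else {
        for (int k = 0; k < 8; k++)            /* open the cell and recurse */
            if (c->child[k])
                tree_walk(c->child[k], xi, theta, ilist, nlist);
    }
}
```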

  11. FLY block diagram:
  • SYSTEM INITIALIZATION
  • TIME STEP CYCLE:
    - TREE FORMATION and BARRIER
    - FORCE COMPUTATION (TREE INSPECTION, ACC. COMPONENTS) and BARRIER
    - UPDATE POSITIONS and BARRIER
  • STOP
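Read as code, the diagram corresponds to a loop of the following shape; the routine names are placeholders standing for the corresponding FLY phases, not the code's actual subroutines.

```c
/* Placeholders for the phases in the block diagram above. */
extern void system_initialization(void);
extern void tree_formation(void);
extern void tree_inspection(void);          /* tree walk */
extern void acceleration_components(void);
extern void update_positions(void);
extern void barrier(void);                  /* global synchronization point */

void fly_main(int nsteps)
{
    system_initialization();
    for (int step = 0; step < nsteps; step++) {   /* TIME STEP CYCLE */
        tree_formation();
        barrier();
        tree_inspection();                  /* FORCE COMPUTATION ... */
        acceleration_components();          /* ... and ACC. COMPONENTS */
        barrier();
        update_positions();
        barrier();
    }
    /* STOP */
}
```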

  12. Parallel implementation of FLY: data distribution. Two main data structures: • particles • tree. Particles are distributed in blocks, so that each processor holds the same number of contiguous bodies (e.g. with 4 processors, bodies 1, 2, 3, … go to the first processor, the next block to the second, and so on). The tree structure is distributed in a cyclic way, so that each processor holds the same number of cells (the first cell to the first processor, the second cell to the second, …). A sketch of both mappings follows below.
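A minimal sketch of the two index-to-processor mappings described above, assuming 0-based global indices and a body count divisible by the number of processors; the function names are illustrative, not FLY's.

```c
/* Block distribution of particles: contiguous bodies per processor. */
int body_owner(int ibody, int nbodies, int nproc)
{
    int block = nbodies / nproc;   /* bodies per processor */
    return ibody / block;          /* bodies 0..block-1 -> PE 0, next block -> PE 1, ... */
}

/* Cyclic distribution of tree cells: dealt out round-robin. */
int cell_owner(int icell, int nproc)
{
    return icell % nproc;          /* cell 0 -> PE 0, cell 1 -> PE 1, ... */
}
```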

  13. Parallel implementation of FLY: work distribution. • Each processor calculates the force for its local particles. • To do that, the whole tree structure (which is distributed among the processors) must be accessed asynchronously (one-sided communications are required). • This leads to a huge communication overhead.

  14. FLY: tips and tricks. To reduce the problems related to communication overhead, we have implemented several “tricks”: • Dynamical load balancing: processors help each other • Grouping: close particles have the same interaction with far distributions of mass • Data buffering

  15. FLY: data buffering. Free RAM segments are dynamically allocated to store remote data (tree cell properties and remote bodies) already accessed during the tree walk procedure. Performance improvement for 16 million bodies on a Cray T3E with 32 PEs, 156 MB for each PE: • without buffering, each PE executes about 700 GET operations for each local body • with buffering, each PE executes only 8 to 25 GET operations for each local body.
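A rough sketch of the buffering idea, under simplifying assumptions: a direct-mapped table caches each remote cell the first time it is fetched, so later accesses during the tree walk cost no further GET. The CellData layout, the table size and the remote_get_cell() routine are hypothetical stand-ins, not FLY's actual buffering scheme.

```c
#define CACHE_SIZE 65536

typedef struct { double cm[3], mass, quad[6]; } CellData;   /* assumed cell record */

static int      cached_id[CACHE_SIZE];    /* global cell id held in each slot, -1 = empty */
static CellData cached_val[CACHE_SIZE];

/* Stands for the one-sided GET of a remote cell (LAPI_Get or MPI_Get). */
extern void remote_get_cell(int global_cell_id, CellData *out);

void cache_init(void)
{
    for (int i = 0; i < CACHE_SIZE; i++)
        cached_id[i] = -1;
}

/* Return the cell data, issuing a remote GET only on a cache miss. */
const CellData *get_cell(int global_cell_id)
{
    int slot = global_cell_id % CACHE_SIZE;
    if (cached_id[slot] != global_cell_id) {        /* miss: fetch and keep it */
        remote_get_cell(global_cell_id, &cached_val[slot]);
        cached_id[slot] = global_cell_id;
    }
    return &cached_val[slot];                       /* hit: no communication */
}
```

A direct-mapped table is only one possible policy; the slide does not give enough detail to reproduce FLY's actual allocation of free RAM segments.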

  16. Why FLYing from LAPI to MPI-2. • LAPI is a proprietary (IBM) parallel programming library • Implementing FLY using MPI-2 improves the portability of our code • The RMA calls introduced in MPI-2 make the porting simple, since there is a direct correspondence between the basic functions: mpi_get(…) ↔ lapi_get(…), mpi_put(…) ↔ lapi_put(…) (see the sketch below) • However, MPI-2 doesn’t have an atomic fetch-and-increment call
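To illustrate the correspondence, here is a hedged sketch of a one-sided read with MPI-2 RMA; the window, displacement and count are made-up parameters rather than FLY's actual call sequence. The LAPI counterpart would call LAPI_Get on the same buffers and wait on a completion counter instead of closing an access epoch.

```c
#include <mpi.h>

/* Read 'count' doubles starting at element 'disp' of the memory window
   exposed by 'target_rank', without the target's participation. */
void fetch_remote(MPI_Win win, int target_rank, MPI_Aint disp,
                  int count, double *local_buf)
{
    MPI_Win_lock(MPI_LOCK_SHARED, target_rank, 0, win);   /* open passive-target epoch */
    MPI_Get(local_buf, count, MPI_DOUBLE,
            target_rank, disp, count, MPI_DOUBLE, win);
    MPI_Win_unlock(target_rank, win);                     /* data valid after unlock */
}
```

Note that MPI_Get only completes when the access epoch is closed, which is why the choice of synchronization mechanism (next slides) matters so much for performance.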

  17. MPI-2 synchronization. • MPI_Win_fence mechanism: suited when we can separate non-RMA access from RMA access, and when all the processors access remote data at the same time. • MPI_Win_lock / MPI_Win_unlock mechanism: suited when we need the data just after the call, and when only one processor accesses the remote data.

  18. MPI-2 synchronization. • The FLY algorithm requires continuous asynchronous access to remote data • Passive-target synchronization is therefore needed • We have to use the lock/unlock mechanism.

  19. MPI-2 drawback. Unfortunately, lock and unlock are usually not implemented efficiently (or are not implemented at all): • LAM: not implemented • MPICH: not implemented • MPICH2: waiting for the next release • IBM AIX: poor performance • IBM TurboMPI2: testing phase

  20. FLY 3.3. Problem: poor performance of the lock/unlock calls. Workaround: rewrite portions of the code (where possible) to separate non-RMA accesses from RMA accesses, so that the fence calls can be used (sketched below). Result: the MPI-2 version runs 2 times faster. Why not port these changes back to the LAPI version as well? Thus FLY 3.3 was born.
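A sketch of that restructuring under the same illustrative assumptions: the owners and displacements of the needed remote cells are collected in a purely local (non-RMA) phase, and the GETs are then issued inside a single fence epoch. MPI_Win_fence is collective over the window, so every process must reach both fences; the buffer layout and argument names are invented for the example.

```c
#include <mpi.h>

/* Fetch nreq remote cell blocks of cell_len doubles each into recv_buf,
   using two collective fences instead of per-access lock/unlock. */
void fetch_cells_with_fence(MPI_Win win, int nreq,
                            const int owner[], const MPI_Aint disp[],
                            int cell_len, double *recv_buf)
{
    MPI_Win_fence(0, win);                     /* start the RMA epoch (collective) */
    for (int r = 0; r < nreq; r++)
        MPI_Get(recv_buf + r * cell_len, cell_len, MPI_DOUBLE,
                owner[r], disp[r], cell_len, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);                     /* recv_buf is valid only after this fence */
}
```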

  21. FLY 3.2 vs FLY 3.3, 2M particles test. [Timing charts not reproduced.] • Static part: tree generation, cell properties, … • Dynamic part: interaction list, force computation, …

  22. FLY 3.2 vs FLY 3.3, 2M particles test. Total simulation time and scalability: timing normalized to the number of processors, (t_n × n) / t_1, where t_n is the total time on n processors. [Charts not reproduced.]

  23. FLY 3.3 vs FLY MPI-2, 2M particles test. [Timing charts not reproduced.] • Static part: tree generation, cell properties, … • Dynamic part: interaction list, force computation, …

  24. Conclusions. Present: • Low-performance MPI-2 version of FLY (for now) • More scalable LAPI version of FLY. Future: • TurboMPI2 • MPICH2 (porting to Linux clusters) • OO interface to hydrodynamic codes (FLASH)
