
Plasma Modelling and Simulations – needs for distributed data processing


Presentation Transcript


  1. Plasma Modelling and Simulations – needs for distributed data processing
Association Euratom–Tekes
J.A. Heikkinen, VTT Processes
Plasma modelling and analysis group (25 persons) at HUT and VTT supporting Euratom fusion research.
Analysis tools for plasma simulation:
- ASCOT particle tracing code (particle detectors, NPA)
- ELMFIRE for fluctuation analysis (Doppler reflectometry)
Massively parallelized, domain-decomposed computing is required:
- Is distributed (grid) computation possible?
- Does DEISA provide appropriate platforms?

  2. Association Euratom–Tekes
MONTE CARLO SIMULATION OF PLASMA PARTICLE DISTRIBUTIONS: THE ASCOT CODE
ASCOT (Accelerated Simulation of Charged Particle Orbits in a Tokamak) is a guiding-centre orbit-following Monte Carlo code for toroidal plasma devices.
• Real tokamak background data are imported into the ASCOT code (ASDEX Upgrade, JET, DIII-D)
• Ensembles of ~500,000 test ions are followed in the edge plasma
• Guiding-centre motion is tracked in 5D phase space, also in the SOL
• Divertor and detector target hits are recorded (3D velocity)
• Edge plasma effects are calculated consistently
• Particle collisions are modelled with Fokker–Planck-equivalent Monte Carlo operators (assuming a Maxwellian background) or by a binary collision model.
Figure: edge plasma orbit topologies (JET #50401) leading to the divertor targets.
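For orientation, the Python sketch below illustrates the kind of guiding-centre orbit following with Monte Carlo pitch-angle scattering that the slide describes. It is not the ASCOT implementation: the 1/R field model, constant safety factor, collision frequency, and all numerical parameters are illustrative assumptions.

import numpy as np

# Illustrative parameters -- not taken from ASCOT or the slides.
R0, B0, a = 1.65, 2.5, 0.5         # major radius [m], field on axis [T], minor radius [m]
q = 2.0                            # safety factor, taken constant for simplicity
m, Ze = 3.34e-27, 1.602e-19        # deuteron mass [kg] and charge [C]
nu = 1.0e3                         # assumed pitch-angle collision frequency [1/s]
dt = 1.0e-8                        # orbit-following time step [s]

def b_field(R):
    """Toroidal field magnitude of a simple 1/R tokamak model."""
    return B0 * R0 / R

def push_guiding_centre(R, z, phi, vpar, mu, rng):
    """One explicit guiding-centre step: parallel streaming, vertical
    grad-B/curvature drift, and a Monte Carlo pitch-angle scattering kick."""
    B = b_field(R)
    vperp2 = 2.0 * mu * B / m
    vd_z = m * (vpar**2 + 0.5 * vperp2) / (Ze * B * R)   # vertical drift of a 1/R field
    # Parallel streaming split into toroidal and poloidal angle increments.
    dphi = vpar * dt / R
    dtheta = vpar * dt / (q * R)
    r = np.hypot(R - R0, z)
    theta = np.arctan2(z, R - R0) + dtheta
    R_new = R0 + r * np.cos(theta)
    z_new = r * np.sin(theta) + vd_z * dt
    # Monte Carlo (Lorentz) pitch-angle scattering: lambda = vpar/v random walk.
    v = np.sqrt(vpar**2 + vperp2)
    lam = vpar / v
    lam = lam * (1.0 - nu * dt) + rng.choice([-1.0, 1.0]) * np.sqrt((1.0 - lam**2) * nu * dt)
    lam = np.clip(lam, -1.0, 1.0)
    vpar_new = lam * v
    mu_new = m * (v**2 - vpar_new**2) / (2.0 * b_field(R_new))   # speed conserved, mu updated
    return R_new, z_new, phi + dphi, vpar_new, mu_new

rng = np.random.default_rng(0)
E = 10e3 * Ze                                    # one 10 keV test deuteron
v0 = np.sqrt(2.0 * E / m)
R, z, phi, vpar = R0 + 0.4 * a, 0.0, 0.0, 0.7 * v0
mu = m * (v0**2 - vpar**2) / (2.0 * b_field(R))
for _ in range(100_000):
    R, z, phi, vpar, mu = push_guiding_centre(R, z, phi, vpar, mu, rng)
print(f"test ion ends at R = {R:.3f} m, z = {z:.3f} m")

In ASCOT, ensembles of such test ions are followed in the imported background fields, and divertor and detector hits are tallied with their full 3D velocity.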

  3. ASDEX Upgrade NPA Simulations
S. Sipilä, T. Kurki-Suonio, J.A. Heikkinen – Association Euratom-Tekes, HUT & VTT
H.-U. Fahrbach, A.G. Peeters – Association Euratom-IPP, IPP Garching
A Neutral Particle Analyzer (NPA) simulation model was developed for the ASCOT MC orbit-following code. Horizontal and vertical sightline adjustment as well as multiple energy channels and a realistic viewing aperture are modelled. The slope of the NBI ion energy tail indicates the background Ti.
NBI/NPA simulations established that the use of the NPA signal for determining the central Ti is feasible but sensitive to the NPA viewing angle.
Figures: NBI tail distributions; simulated NPA signal.
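As a toy illustration of the synthetic-diagnostic step, the short sketch below bins recorded test-particle hits into NPA energy channels after a crude aperture cut. The hit format, channel edges, and the fake spectrum are invented for the example and do not reproduce the actual ASCOT NPA model.

import numpy as np

rng = np.random.default_rng(3)
n_hits = 100_000
energy_keV = rng.exponential(8.0, n_hits)          # fake energy spectrum of recorded neutrals
pitch = rng.uniform(-1.0, 1.0, n_hits)             # v_par/v of each recorded hit
aperture = 0.05                                    # toy acceptance: nearly perpendicular sightline
channels = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])   # assumed channel edges [keV]

accepted = np.abs(pitch) < aperture                # crude sightline/viewing-aperture cut
signal, _ = np.histogram(energy_keV[accepted], bins=channels)
for lo, hi, counts in zip(channels[:-1], channels[1:], signal):
    print(f"{lo:5.1f}-{hi:5.1f} keV : {counts:6d} counts")

In the real analysis it is the slope of the high-energy channels that carries the background Ti information.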

  4. Association Euratom–Tekes
GLOBAL GYROKINETIC SIMULATION OF PLASMA TURBULENCE: THE ELMFIRE CODE
J.A. Heikkinen, S. Henriksson, S. Janhunen, T.P. Kiviniemi, and F. Ogando
• A full-f nonlinear gyrokinetic particle-in-cell approach for global plasma simulation is adopted
• Gyrokinetics is based on the Krylov–Bogoliubov averaging method in the description of FLR effects (P. Sosenko, 2001)
• A direct implicit solution of the perturbations by the ion polarization drift and the electron parallel acceleration is applied
• Transport simulations with strong variations of the particle distribution are possible.
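To make the particle-in-cell structure concrete, here is a minimal explicit 1-D electrostatic PIC loop in Python. It only sketches the scatter / field-solve / gather / push cycle; ELMFIRE itself is gyrokinetic, three-dimensional, and uses the implicit polarization-drift and electron-acceleration solution described above, none of which appears here, and all numbers are illustrative.

import numpy as np

# Normalised units: plasma frequency = 1, electron charge -1, mass 1.
ncell, npart, L, dt, nsteps = 64, 20_000, 2.0 * np.pi, 0.1, 200
dx = L / ncell
rng = np.random.default_rng(1)
x = rng.uniform(0.0, L, npart)                     # marker positions
v = rng.normal(0.0, 1.0, npart) + 0.1 * np.sin(x)  # velocities with a seeded perturbation
w = L / npart                                      # marker weight (full f: markers carry all of f)

for _ in range(nsteps):
    # 1) Scatter: deposit electron density on the grid (cloud-in-cell weighting).
    cell = (x / dx).astype(int) % ncell
    frac = x / dx - np.floor(x / dx)
    n_e = np.zeros(ncell)
    np.add.at(n_e, cell, w * (1.0 - frac) / dx)
    np.add.at(n_e, (cell + 1) % ncell, w * frac / dx)
    rho = n_e.mean() - n_e                         # neutralising ion background minus electrons
    # 2) Field solve: d^2 phi / dx^2 = -rho on the periodic grid, via FFT.
    k = 2.0 * np.pi * np.fft.fftfreq(ncell, d=dx)
    phi_k = np.zeros(ncell, dtype=complex)
    phi_k[1:] = np.fft.fft(rho)[1:] / k[1:] ** 2
    E_grid = np.fft.ifft(-1j * k * phi_k).real     # E = -d(phi)/dx
    # 3) Gather: interpolate E back to the markers and push them explicitly.
    E_part = E_grid[cell] * (1.0 - frac) + E_grid[(cell + 1) % ncell] * frac
    v += -E_part * dt                              # electron charge is -1
    x = (x + v * dt) % L

print("field energy at the end:", 0.5 * np.sum(E_grid**2) * dx)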

  5. ELMFIRE is a global 3-D gyrokinetic particle-in-cell code
• developed at TKK and VTT during 2000–2005
• runs on the IBM eServer Cluster 1600 supported by CSC
• benchmarked against other gyrokinetic codes (mode growth, turbulence saturation level, neoclassical behaviour)
ELMFIRE has a number of advanced features:
- Hamiltonian guiding-center equations of motion
- electron acceleration along B treated implicitly
- direct implicit ion polarization (DIP) sampling of coefficients in the gyrokinetic equation
- quasi-ballooning coordinates to solve the gyrokinetic Poisson equation
- binary collision model and option for multiple ion species
- versatile heat and particle sources
- model for plasma recycling with wall boundaries
- extensive diagnostics package with videos for 3-D illustration of transport
- domain decomposition to save computer memory
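One of the listed features, the binary collision model, lends itself to a compact illustration. The Python sketch below pairs markers within each grid cell and rotates the relative velocity of each pair by a small random angle, which conserves momentum and energy exactly; the pairing rule, the equal-mass assumption, and the scattering-angle variance are assumptions for the example, not ELMFIRE's operator.

import numpy as np

def binary_collisions(v, cell, nu_dt, rng):
    """Pair particles cell by cell and scatter each pair in its centre-of-mass
    frame. Equal masses are assumed; nu_dt sets the deflection-angle variance."""
    for c in np.unique(cell):
        idx = rng.permutation(np.where(cell == c)[0])
        pairs = idx[: len(idx) // 2 * 2].reshape(-1, 2)      # drop one unpaired marker
        for i, j in pairs:
            u = v[i] - v[j]                                  # relative velocity
            umag = np.linalg.norm(u)
            if umag == 0.0:
                continue
            vcm = 0.5 * (v[i] + v[j])
            theta = rng.normal(0.0, np.sqrt(nu_dt))          # small random deflection angle
            phi = rng.uniform(0.0, 2.0 * np.pi)
            e1 = np.cross(u, [1.0, 0.0, 0.0])
            if np.linalg.norm(e1) < 1e-12 * umag:            # u was (anti)parallel to x
                e1 = np.cross(u, [0.0, 1.0, 0.0])
            e1 /= np.linalg.norm(e1)
            e2 = np.cross(u / umag, e1)
            u_new = u * np.cos(theta) + umag * np.sin(theta) * (np.cos(phi) * e1 + np.sin(phi) * e2)
            v[i] = vcm + 0.5 * u_new                         # |u| unchanged -> energy conserved
            v[j] = vcm - 0.5 * u_new
    return v

rng = np.random.default_rng(2)
v = rng.normal(0.0, 1.0, size=(1000, 3))
cell = rng.integers(0, 8, size=1000)
p0, E0 = v.sum(axis=0), 0.5 * np.sum(v**2)
v = binary_collisions(v, cell, nu_dt=0.01, rng=rng)
print("momentum drift:", np.abs(v.sum(axis=0) - p0).max(), " energy drift:", 0.5 * np.sum(v**2) - E0)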

  6. Numerical performance
• 200–1000 grid cells in the poloidal (quasi-ballooning) direction
• 30–100 grid cells in the radial direction
• 4–64 grid cells along B
• Simulation time step 10–50 ns
• 5–240 million electrons and ions (different species)
• Massively parallelized (MPI): typical CPU consumption is about 96 hours for 200 µs of simulated time with 32 processors sharing 80 million particles, using a simulation time step of 35 ns and 25,600 grid cells.
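A quick consistency check of the quoted run size, assuming the simulated interval is 200 µs (the reading consistent with the 35 ns step and the per-step cost given on slide 9):

steps = 200e-6 / 35e-9                  # 200 microseconds simulated with 35 ns steps
markers_per_proc = 80e6 / 32            # 80 million markers shared by 32 processors
wall_seconds = 96 * 3600                # quoted ~96 h of wall-clock time
print(f"time steps              : {steps:,.0f}")
print(f"markers per processor   : {markers_per_proc:,.0f}")
print(f"wall-clock time per step: {wall_seconds / steps:.0f} s")

The result, roughly 5,700 steps at about a minute each, matches the "some minutes of CPU per time step" figure quoted on slide 9.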

  7. The linear mode growth rate and frequency are well in accordance with the ”Cyclone base” test case for adiabatic electrons. With kinetic electrons we find about 2 times higher growth rates and a quasi-linear saturation level in accordance with Y. Chen et al., Nucl. Fusion 43 (2003) 1121.
Figure: adiabatic electrons; reproduced from A.M. Dimits et al., Phys. Plasmas 7 (2000) 969, with ELMFIRE results added to the figure.
Nonlinear saturation of χi from ELMFIRE to ~2.2 ρi²vT/Ln.

  8. Doppler reflectometry at the FT-2 tokamak
S. Henriksson, J.A. Heikkinen, S. Janhunen, T. Kiviniemi – Association Euratom-Tekes, HUT & VTT
V. Bulanin, Ioffe Institute
• ELMFIRE simulations of fluctuations are combined with the weighting function for microwave scattering off the fluctuations.
• Provides information on transport coefficients and rotation of the plasma.
• The numerically obtained detector signal was successfully benchmarked against the experimental signal at the FT-2 tokamak.
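The idea of combining simulated fluctuations with a scattering weighting function can be shown with a toy synthetic signal: a rotating density perturbation is projected onto a localised, k-selective weight, and the Doppler shift of the resulting time trace recovers the rotation. The fluctuation shape, weighting function, wavenumber, and rotation speed below are invented for the example; the real analysis uses ELMFIRE fluctuation data and the FT-2 scattering geometry.

import numpy as np

nx, nt, dx, dt = 512, 2048, 1.0e-3, 1.0e-7         # spatial grid [m] and sampling time [s]
x = np.arange(nx) * dx
k0 = 2.0 * np.pi * 600.0                           # probed fluctuation wavenumber [rad/m]
v_rot = 2.0e3                                      # assumed poloidal rotation speed [m/s]
# Localised, k-selective scattering weight (stands in for the reflectometer response).
weight = np.exp(-(((x - x.mean()) / 0.02) ** 2)) * np.exp(1j * k0 * x)

signal = np.empty(nt, dtype=complex)
for it in range(nt):
    dn = np.cos(k0 * (x - v_rot * it * dt))        # poloidally rotating density fluctuation
    signal[it] = np.sum(weight * dn) * dx          # one detector sample

freq = np.fft.fftfreq(nt, d=dt)
spectrum = np.abs(np.fft.fft(signal)) ** 2
f_doppler = freq[np.argmax(spectrum)]
print(f"Doppler peak {f_doppler / 1e3:.0f} kHz, expected {k0 * v_rot / (2.0 * np.pi) / 1e3:.0f} kHz")

The shift of the spectral peak with the rotation speed is, in essence, what lets the benchmarked detector signal carry information on plasma rotation.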

  9. Massive parallelization (128–1024 processors) is compulsory for ASCOT/ELMFIRE simulations
- In addition, domain decomposition is fundamental for ELMFIRE
• Distributed (grid) computing is straightforward for ASCOT calculations
• Is distributed (grid) computing possible for ELMFIRE calculations?
- with 32 processors, each time step takes some minutes of CPU
- processor data (½ of processor memory) must be collected after each time step (takes some 10 s within a node)
• DEISA network infrastructure
- CSC is a partner
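To put the grid-computing question in perspective, here is a back-of-the-envelope estimate of the per-step data collection over a wide-area link, compared with the ~10 s quoted inside a node. The memory per processor and the link bandwidths are illustrative assumptions; only the "half of processor memory", "~10 s within a node", and "some minutes per step" figures come from the slides.

nproc = 32
mem_per_proc_gb = 2.0                          # assumed memory per processor [GB]
gather_gb = nproc * 0.5 * mem_per_proc_gb      # half of each processor's memory, every step
step_compute_s = 2 * 60                        # "some minutes" of CPU per time step
intranode_s = 10.0                             # collection time quoted inside a node

for name, gbit_per_s in [("1 Gbit/s wide-area link", 1.0),
                         ("10 Gbit/s dedicated link", 10.0)]:
    transfer_s = gather_gb * 8.0 / gbit_per_s
    print(f"{name}: {transfer_s:5.0f} s to collect {gather_gb:.0f} GB "
          f"(vs {intranode_s:.0f} s in a node, ~{step_compute_s:.0f} s of computation per step)")

Under these assumptions the per-step gather would dominate on a slow wide-area link but stay comparable to the computation on a fast dedicated network, which is exactly the kind of infrastructure question the DEISA point raises.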
