
Fusion Status Report






  1. Fusion Status Report Francisco Castejón (francisco.castejon@ciemat.es) CIEMAT. Madrid, Spain.

  2. Outline • Strategy. • Fusion Deployment and VO setup. • The problem of the name. • Present Applications: Computing in Plasma Physics. • Future Applications in the grid. • Data storage and handling. • Conclusions.

  3. Strategy • Computing: • Identify common codes suitable for the GRID (ongoing). • Adapt codes to the GRID (ongoing). • Set up the VO (ongoing). • Production phase. • Data handling: • Define strategies for data storage and database organization. • Protocol for data access.

  4. ITER: Making Decisions in Real Time! • Data analysis and reduction: artificial intelligence, neural networks, pattern recognition. • Data acquisition and storage (Grid, supercomputers). • Simulations: large codes on different platforms (Grid, supercomputers). • Decision for the present/next shot. • One half-hour shot every hour and a half: decisions must be made in real time.

  5. ITER Partners. Distributed participation. Data access. Remote control rooms?

  6. International Tokamak (ITPA) and Stellarator (SIA) collaborations. • Russia (EGEE Project): T-10 (Kurchatov), Globus-M (Ioffe), T-11M (TRINITI), L-2 (Gen. Phys. Inst.). • Japan (GRID project?): JT-60 (Naka), LHD (Toki), CHS (Nagoya), H-J (Kyoto). • EU (EGEE Project): JET (EFDA), ASDEX (Ger.), TORE SUPRA (Fra.), MAST (UK), TEXTOR (Ger.), TCV (Switz.), FTU (Italy), W7-X (Ger.), TJ-II (Spain). • USA (USA Fusion Grid): Alcator C-Mod (MIT), DIII-D (San Diego), NSTX (Princeton), NCSX (Princeton), HSX (Wisconsin), QPS (Oak Ridge). • China, Brazil, Korea, India (EGEE Project): KSTAR (Korea), TCABR (Bra.), HT-7 (China), HL-2A (China), SST-1 (India).

  7. PARTNERS and Resources for the VO • SW Federation: CIEMAT, BIFI, UCM, INTA (Spain). • Kurchatov (Russia). • Culham Laboratory, UKAEA (UK). • KISTI (South Korea). • ENEA (Italy). • CEA-Cadarache (France). • … • Experience in using and developing fusion applications. • Experience in porting applications and developing Grid technologies. • Connection with EELA (some fusion partners: Brazil, Mexico, Argentina). • Needed: bring in IPP-Max Planck (Germany) and other EFDA Associations. • Also needed: contacts with the USA, China, Japan, …

  8. VO Deployment. Resource Broker at BIFI (Spain). VO Manager: I. Campos (BIFI, Spain). http://grid.bifi.unizar.es/egee/fusion-vo/ http://www-fusion.ciemat.es/collaboration/egee/ • Present: CIEMAT: 27 KSpecInts; BIFI: 8 KSpecInts; INTA: 6 KSpecInts. • Within less than 6 months: JET: 38 KSpecInts; BIFI: 32 KSpecInts; CEA-Cadarache?, KISTI?, INTA?, ENEA? • Beginning of 2007: JET: 32 additional cores; BIFI: 32 additional cores; CIEMAT?; CEA-Cadarache? (second phase already committed).

  9. VO Deployment: The Problem of the Name. • The Russian Grid has adopted the same VO name, Fusion, as we have. • Jobs submitted through our resource broker end up on that Grid. • Our VO deployment is hindered. • They should change the name in the short term (~1 week); a suitable name: Fusion-RDIG. • Otherwise we have to change our name, with consequences for the Russian certificates.

  10. COMPUTING in the GRID: Present Applications • Applications with distributed calculations: Monte Carlo, separate estimates, … • Multiple ray tracing: e.g. TRUBA. • Stellarator optimization: VMEC. • Transport and kinetic theory: Monte Carlo codes.

  11. Multiple Ray Tracing: TRUBA. Beam simulation: a bunch of rays with the beam waist far from the critical layer (100-200 rays), plus a bunch of rays with the beam waist close to the critical layer (100-200 rays) × (100-200 wave numbers) ≈ 10^5 rays: a GRID problem. Single ray (1 PE): Hamiltonian ray-tracing equations.
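The decomposition on this slide is embarrassingly parallel: every (ray, wave-number) pair is an independent Hamiltonian integration, so the ~10^5 rays map onto one grid job each. A minimal sketch of that task farm, with a hypothetical `trace_ray` standing in for TRUBA's single-ray integrator:

```python
def trace_ray(task):
    """Hypothetical stand-in for TRUBA's single-ray Hamiltonian integration;
    on the grid each call would run as an independent job on one PE."""
    ray_id, k_id = task
    return (ray_id, k_id, "traced")  # placeholder for the ray's trajectory

# One bunch of 100-200 rays, times 100-200 wave numbers for the bunch near
# the critical layer: order 10^4-10^5 independent single-ray jobs.
n_rays, n_wavenumbers = 150, 150
tasks = [(r, k) for r in range(n_rays) for k in range(n_wavenumbers)]

# Locally we just map over the list; on the grid this loop becomes one
# job submission per task (e.g. via a metascheduler such as GridWay).
results = [trace_ray(t) for t in tasks]
```

Because no task communicates with any other, the only grid-specific work is job submission and collecting the per-ray outputs.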

  12. TRUBA: Multiple Ray Tracing • TRUBA for EBW: • Real geometry in TJ-II, coming from a supercomputer run (VMEC). • A single non-relativistic ray: about 18 min. • A single relativistic ray: about 40 min. • Some problems with geometry libraries. • Ported to the grid using GridWay (for the moment). • See: J. L. Vázquez-Poleti. “Massive Ray Tracing in Fusion Plasmas on EGEE”. User Forum, 2006.

  13. Optimised Stellarators QPS and NCSX: supercomputer optimization.

  14. Stellarator Optimization in the Grid. Plasma configurations may be optimised numerically by varying the field parameters. • Many different magnetic configurations are operating nowadays. • Optimization necessity based on knowledge of stellarator physics. • Every variant computed on a separate processor (~10 min). • VMEC (Variational Moments Equilibrium Code): 120 Fourier parameters are varied.

  15. VMEC on the Kurchatov GRID • LCG-2-based Russian Data Intensive Grid consortium resources. • About 7,500 cases computed (about 1,500 were not VMEC-computable, i.e. no equilibrium). • Each case took about 20 minutes. • Up to 70 simultaneous jobs running on the grid. • A genetic algorithm was used to select the optimum case. • See: V. Voznesensky. “Genetic Optimisations in Grid”. User Forum, 2006.
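The loop on this slide can be illustrated with a toy genetic optimisation over 120 parameters. This is not the Voznesensky code: `evaluate` is a made-up fitness with a made-up "no equilibrium" criterion, standing in for a ~20-minute VMEC run on one grid node.

```python
import random

N_PARAMS = 120   # boundary Fourier coefficients, per the slide

def evaluate(params):
    """Stand-in for a ~20-minute VMEC run on one grid node.
    Returns None when no equilibrium exists (the ~1,500 failed cases)."""
    if sum(abs(p) for p in params) > 100.0:   # toy 'no equilibrium' criterion
        return None
    return -sum(p * p for p in params)        # toy fitness to maximise

def mutate(params, sigma=0.1):
    """Gaussian perturbation of every Fourier coefficient."""
    return [p + random.gauss(0.0, sigma) for p in params]

random.seed(0)
population = [[random.uniform(-1.0, 1.0) for _ in range(N_PARAMS)]
              for _ in range(20)]

for generation in range(10):
    scored = [(evaluate(p), p) for p in population]
    scored = [(f, p) for f, p in scored if f is not None]   # drop failures
    scored.sort(key=lambda fp: fp[0], reverse=True)
    parents = [p for _, p in scored[:5]]                    # keep the best 5
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best_fitness = max(f for f in (evaluate(p) for p in population)
                   if f is not None)
```

On the grid, each `evaluate` call in a generation would be dispatched as a separate job (up to the ~70 concurrent slots mentioned above) and the selection step would wait for the batch to return.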

  16. Kinetic Transport • Following independent particle orbits. • Monte Carlo techniques: particles distributed according to experimental density and ion-temperature profiles (Maxwellian distribution function). • A suitable problem for cluster and GRID technologies.
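The Monte Carlo loading described above can be sketched as follows. The radial loading, the ion-temperature profile and the use of the proton mass are illustrative assumptions, not TJ-II measurements:

```python
import math
import random

def sample_particle(rho, t_i_ev, m_i=1.67e-27):
    """Draw one ion's parallel and perpendicular velocities from a
    Maxwellian at the local ion temperature (profile is made up here)."""
    v_th = math.sqrt(1.602e-19 * t_i_ev / m_i)   # thermal speed, m/s
    v_par = random.gauss(0.0, v_th)
    v_perp = math.hypot(random.gauss(0.0, v_th), random.gauss(0.0, v_th))
    return rho, v_par, v_perp

random.seed(1)
particles = []
for _ in range(10_000):
    rho = math.sqrt(random.random())             # density-like radial loading
    t_i = max(100.0 * (1.0 - rho * rho), 1.0)    # toy T_i profile, eV
    particles.append(sample_particle(rho, t_i))

# Each loaded particle's orbit would then be integrated on its own PE.
mean_v_par = sum(p[1] for p in particles) / len(particles)
```

The zero-drift check on `mean_v_par` is the usual sanity test for a Maxwellian loading; real runs would load from the measured profiles instead of the toy ones above.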

  17. Kinetic Transport. Example of an orbit in the real 3D TJ-II geometry (single PE): ~1 GB of data, 24 h × 512 PEs. Distribution function of the parallel velocity at a given position (data analysis).

  18. Kinetic transport • No collisions: 0.5 ms of trajectory takes 1 s of CPU. • Collisions: 1 ms of trajectory takes 4 s of CPU. • Particle life: 150-200 ms, so a single particle takes ~10 min. • Necessary statistics for TJ-II: 10^7 particles.
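The slide's figures imply the following back-of-envelope budget, assuming the collisional cost, the lower 150 ms lifetime, and ideal scaling across 512 PEs with no scheduling overhead:

```python
# Back-of-envelope CPU budget from the slide's own figures.
cpu_s_per_ms = 4.0            # CPU seconds per ms of trajectory, collisional
life_ms = 150.0               # particle lifetime, lower end of 150-200 ms
n_particles = 10**7           # statistics needed for TJ-II

per_particle_s = cpu_s_per_ms * life_ms          # 600 s, the slide's ~10 min
total_cpu_s = per_particle_s * n_particles       # ~6e9 CPU-seconds
wall_days_on_512_pe = total_cpu_s / 512 / 86400  # ideal scaling assumption
```

The result (~135 days of wall time on 512 dedicated PEs) shows why full statistics require pooling grid resources well beyond a single cluster.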

  19. COMPUTING in the GRID: Future Applications • EDGE2D application for tokamaks. • Transport analysis of multiple shots (typically 10^4 shots) or predictive transport with multiple models, e.g. ASTRA: CIEMAT (Spa) + IPP (Ger) + Kurchatov (Rus) + EFDA (EU) + … • Neutral particle dynamics, EIRENE: CIEMAT (Spa) + IPP (Ger).

  20. JET – Flagship of Worldwide Fusion: EDGE2D Equilibrium code.

  21. EDGE2D: Determine the plasma shape from measurements: plasma current, pressure, magnetic field… Cross-section of present EU D-shaped tokamaks compared to the ITER design. • The EDGE2D code solves the 2D fluid equations for the conservation of energy, momentum and particles in the plasma edge region. • Ions, electrons and all ionisation stages of multiple species are considered. • Interaction with the vessel walls is simulated by coupling to Monte Carlo codes, which provide the neutral and impurity sources.

  22. Massive Transport Calculations. For instance: enhanced heat confinement in TJ-II. Lower heat diffusivity for low electron density and high absorbed power density. A different case on every PE.
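A massive scan of this kind can be sketched as below, one (density, power) case per grid PE. The diffusivity function is invented purely to mimic the reported trend (lower heat diffusivity at low density and high power); it is not a TJ-II result:

```python
def heat_diffusivity(n_e, p_abs):
    """Toy scaling only: diffusivity grows with electron density and falls
    with absorbed power density, mimicking the trend the slide reports."""
    return 0.5 * n_e / p_abs   # m^2/s, illustrative

# One (density, power) case per grid PE; here a serial scan stands in.
cases = [(n, p) for n in (0.5, 1.0, 1.5)      # n_e, units of 10^19 m^-3
                for p in (0.1, 0.3, 0.5)]     # P_abs, MW/m^3
chi = {c: heat_diffusivity(*c) for c in cases}

best_case = min(chi, key=chi.get)   # lowest chi: low density, high power
```

In production the dictionary comprehension becomes one grid job per case, and only the scalar diffusivities come back for comparison.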

  23. EIRENE Code. Trajectory of a He atom in TJ-II, vertical and horizontal projections: it starts at the green point and is absorbed in the plasma by an ionization process. The real 3D geometry of the TJ-II vacuum chamber is considered.

  24. DATA HANDLING • Storage: • Large data flux: 10^4 sensors × 20-50 kHz sampling = 1-10 GB per second of raw data; × 0.5 h ≈ 3 TB per ITER shot, every 1.5 h. • Data access & sharing in large cooperative experiments; strategy to be defined: • Database with distributed access, or distributed storage? • The evolution of technologies until ITER operates. • If distributed storage: we need a standard representation for experimental data in LCG-2/gLite CE middleware. • Storage should also allow some basic processing: neural networks, clustering…
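A quick consistency check of the quoted flux, assuming 2-byte (16-bit ADC) samples; the slide gives no sample size, so this assumption puts the estimate at the low end of the quoted 1-10 GB/s and ~3 TB figures, with larger samples and metadata overhead pushing it upward:

```python
# Consistency check of the slide's ITER data-flux estimate.
sensors = 10**4
sample_rate_hz = 50_000          # upper end of the quoted 20-50 kHz
bytes_per_sample = 2             # assumption: 16-bit samples, not in the slide

rate_gb_s = sensors * sample_rate_hz * bytes_per_sample / 1e9   # GB/s
shot_s = 0.5 * 3600                                             # half-hour shot
per_shot_tb = rate_gb_s * shot_s / 1000                         # TB per shot
```

Even this lower bound (1 GB/s, ~1.8 TB per shot) arrives every 1.5 h, which is what drives the distributed-storage question above.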

  25. DAS Tools: Visualization, DAQ and Processing. Grid-aware protocols to be added for: • Data navigation and mining. • Data exchange. • Data search. • Event catching.

  26. Conclusions • The Fusion VO on the grid is almost ready (pending the problem of the name). • An effort to bring in more partners inside and outside EFDA is under way. • Several applications are running on the grid. • Future applications for the grid have been identified. • A deep discussion and investigation of large-scale data handling is needed. • A cookbook for data handling in the grid is desirable.
