
Advanced beam simulations in AFRD

Presentation Transcript


  1. Advanced beam simulations in AFRD. Office of Science. J.-L. Vay, Lawrence Berkeley National Laboratory. May 2, 2012. SciDAC-II ComPASS

  2. Outline • AFRD beam simulation tools • Overview • Strength in innovative methods • Effort toward integration • Selected sample of recent applications • BELLA, NGLS, NDCX-II, E-cloud SPS, multilevel parallelism, ping-pong modes • Summary

  3. AFRD develops and supports several important physics codes. PIC = Particle-In-Cell: Lagrangian macroparticles plus fields on grids (finite-difference solvers), with interpolation between particles and fields (ES = electrostatic; EM = electromagnetic).
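
Slide 3 defines the PIC loop only in words. As a concrete illustration, here is a minimal 1D electrostatic PIC step in Python showing the pieces that definition names: deposit to the grid, field solve, gather back to the particles, and particle push. This is a generic sketch in normalized units with invented parameters, not code from Warp, Impact or any other AFRD package (and it uses a spectral Poisson solve for brevity where the production codes use finite-difference solvers).

```python
# Minimal 1D electrostatic PIC step (illustrative sketch only, normalized units).
import numpy as np

nx, lx = 64, 1.0                      # grid cells, domain length
dx = lx / nx
np.random.seed(0)
x = np.random.uniform(0.0, lx, 1000)  # macroparticle positions
v = np.zeros_like(x)                  # macroparticle velocities
q_over_m, weight, dt = -1.0, 1.0e-3, 0.05

# 1) Deposit: scatter particle charge to the two nearest grid nodes (linear weighting).
idx = np.floor(x / dx).astype(int)
frac = x / dx - idx
rho = np.zeros(nx)
np.add.at(rho, idx % nx, weight * (1.0 - frac))
np.add.at(rho, (idx + 1) % nx, weight * frac)

# 2) Field solve: periodic Poisson equation via FFT, then E = -dphi/dx.
rho_hat = np.fft.rfft(rho - rho.mean())
k = 2.0 * np.pi * np.fft.rfftfreq(nx, d=dx)
phi_hat = np.zeros_like(rho_hat)
phi_hat[1:] = rho_hat[1:] / k[1:] ** 2
phi = np.fft.irfft(phi_hat, nx)
efield = -np.gradient(phi, dx)

# 3) Gather: interpolate E back to particle positions (same linear weighting).
e_part = efield[idx % nx] * (1.0 - frac) + efield[(idx + 1) % nx] * frac

# 4) Push: advance velocities and positions (leapfrog-style update, periodic domain).
v += q_over_m * e_part * dt
x = (x + v * dt) % lx
```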

  4. Many users at institutions worldwide (B = BeamBeam3D, I = Impact, P = Posinst, W = Warp). • United States: ANL (B,I,P), BNL (B,I,P,W), Cornell (I,P), FNAL (B,I,P,W), ISU (I), JLab (B,I,P), LANL (I,P), LBNL (B,I,P,W), LLNL (W), MSU (I,W), NIU (I), ORNL (I,P), SLAC (I,P,W), Stanford (I), Tech-X (P), UM (W), UMD (W), UW (I), UCLA (I), WSU (W), Yale U (B,I). • Europe/Asia: CERN (P,W), Diamond (I), ESS (I), Fermi/Elettra (I), Frankfurt (I), GSI (I,W), Hiroshima U. (W), Hong Kong U. (W), IBS (I), IHEP (I,P), IMPCAS (I), KEK (I), PAL (I), PSI (I), RRCAT (I), RAL (I), SINAP (I), Technion (W).

  5. AFRD tools are applied to the design, optimization, risk minimization and support of particle accelerators • Applied to • linacs, transfer lines, rings, colliders, injectors, particle traps (e.g. anti-H) and laser plasma accelerators (LPA), • hadron machines, lepton machines, multi-charge state beams, • with impacts spanning the breadth of DOE/SC • HEP: Tevatron, Main Injector, NML photoinjector, LHC, SPS, LHC injector upgrades, ILC, BELLA, CESR-TA, Project-X • NP: FRIB, electron-ion colliders • BES: SNS, LCLS, NGLS • FES: ion beam dynamics for Heavy Ion Fusion and HEDP (NDCX-II) • partially funded by SciDAC/ComPASS, with collaborations with ASCR • also partially funded by LDRD and SBIR partnerships in the past

  6. AFRD code developers and users -- 28 total (21 + 7 guests). Affiliations: 1 = ALS (4), 2 = CBP (7+), 3 = HIF/IBT (10+), 4 = LOASIS (6+); * = guest (LLNL). J. Barnard (3*), C. Benedetti (4), S. Bulanov (4), M. Chen (4), R. Cohen (3*), A. Friedman (3*), M. Furman (2), C. Geddes (4), D. Grote (3*), E. Henestroza (3), Q. Ji (3), S. Lund (3*), H. Nishimura (1), A. Persaud (3), C. Papadopoulos (1,2), S. Paret (2), G. Penn (2), J. Qiang (2), M. Reinsch (2), S. Rykovanov (4), R. Ryne (2), W. Sharp (3*), C. Sun (1), M. Terry (3*), J.-L. Vay (4,2,3), M. Venturini (2), W. Wan (1), L. Yu (4).

  7. Fruitful collaborations between AFRD & CRD/NERSC. Proximity of CRD & NERSC is a great asset. Recent & present AFRD-CRD/NERSC collaborations: • E. Wes Bethel et al. (VACET): beam path analysis for LPA • O. Rubel, C. G. R. Geddes, E. Cormier-Michel, K. Wu, Prabhat, G. H. Weber, D. M. Ushizima, P. Messmer, H. Hagen, B. Hamann, E. W. Bethel, Automatic beam path analysis of laser wakefield particle acceleration data, Computational Science & Discovery, vol. 2, 015005 (2009) • ExaHDF5 team: parallel I/O, analysis, visualization • Chou, Wu, Rubel, Howison, Qiang, Prabhat, Austin, Bethel, Ryne, Shoshani, Parallel Index and Query for Large Scale Data Analysis, to appear in SuperComputing 2011 • H. Shan and X. Li: parallel performance optimization • P. Colella: LOASIS AMR modeling of capillary • A. Koniges: B. Austin (parallel optimization), B. Liu (ALE-AMR)

  8. Outline • AFRD beam simulation tools • Overview • Strength in innovative methods • Effort toward integration • Selected sample of recent applications • BELLA, NGLS, NDCX-II, E-cloud SPS, multilevel parallelism, ping-pong modes • Summary

  9. Many algorithms invented, improved or pioneered in AFRD codes

  10. …and adopted by other codes • Example: the Lorentz boosted frame method* and associated numerical techniques have been adopted by others: • these helped Tech-X implement the moving antenna algorithm in Vorpal, • UCLA has requested help for an implementation in Osiris. *Phys. Rev. Lett. 98 (2007)

  11. Outline • AFRD beam simulation tools • Overview • Strength in innovative methods • Effort toward integration • Selected sample of recent applications • BELLA, NGLS, NDCX-II, E-cloud SPS, multilevel parallelism, ping-pong modes • Summary

  12. In the past, codes developed mostly separately. Codes and developers: • BeamBeam3D, Impact: J. Qiang, R. Ryne (CBP) • INF&RNO: C. Benedetti (LOASIS) • POSINST: M. Furman (CBP) • Warp: D. Grote, J.-L. Vay (Fusion).

  13. Recently, Warp & Posinst were bridged (Warp-POSINST) and Warp expanded its reach within the division. Codes and developers: • BeamBeam3D, Impact: J. Qiang, R. Ryne (CBP) • INF&RNO: C. Benedetti (LOASIS) • POSINST: M. Furman (CBP) • Warp-POSINST bridge • Warp: D. Grote, J.-L. Vay (Fusion/IBT, CBP, LOASIS).

  14. Toward further integration -- some AFRD codes ported to a common repository • Together with increased modularity, this will provide opportunities for: • co-development within AFRD, • collaboration with CRD (easier & more productive on isolated modules than on full codes). (Diagram: developers and codes feeding an AFRD common repository, linked to a CRD repository; CRD = Computational Research Division.)

  15. Why we are developing codes within AFRD • No black box: 100% knowledge of numerical methods and implementation • greatly reduces uncertainties in understanding simulation results, • thus greatly enhances the chances of agreement, or of understanding the reasons for disagreement, between simulations and experiments, • allows fast development of specialized or improved algorithms. • Control over priorities and the pace of development of needed capabilities. • Simulation codes are increasingly becoming critical strategic assets • bigger, faster computers and improved numerical methods increase fidelity, • the embodied physical models are constantly improved. • Access to the best simulation codes allows for faster design, commissioning and understanding of experiments.

  16. Outline • AFRD beam simulation tools • Overview • Strength in innovative methods • Effort toward integration • Selected sample of recent applications • BELLA, NGLS, NDCX-II, E-cloud SPS, multilevel parallelism, ping-pong modes • Summary

  17. Wide array of applications (grid of example panels, codes in parentheses): • Laser plasma acceleration (INF&RNO, Warp) • HEDP/HIF driver: injection, transport, plasma neutralization (Warp) • Multi-charge state beams: FRIB, LEBT -- Project X (Impact, Warp) • Coherent Synchrotron Radiation (Warp, CSR3D) • Microbunching in FEL linac injectors (Impact) • Beam-beam effects: LHC, RHIC, Tevatron, KEK-B (BeamBeam3D) • Traps: PS Paul trap, Alpha anti-H trap (Warp; courtesy H. Sugimoto) • Beam dynamics in rings & linacs: Montague resonance, SNS (Impact) • Electron cloud effects: SPS (Posinst, Warp-Posinst). Representative run sizes quoted on the slide: 6 h on 2k CPUs; 6 h on 80k CPUs; 5 billion particles, 5 h on 1k CPUs; 6 h on 12k CPUs.

  18. BELLA Project -- state-of-the-art PW facility for laser accelerator science. (Schematic: laser driving a ~10 GeV e- beam in a ~1 m plasma.) Modeling from first principles is challenging because of the scale separation: plasma length ~1 m versus laser wavelength ~1 µm. A 1D PIC simulation of one BELLA stage demanded ~5,000 CPU-hours in 2007.

  19. Simulation in a Lorentz boosted frame reduces the range of scales by orders of magnitude*. With a boost of γ = 100, the lab-frame laser wavelength λ = 0.8 µm becomes λ' ≈ 2γλ = 0.16 mm and the plasma length L = 0.8 m becomes L' ≈ L/γ = 8 mm; the range of scales shrinks from 0.8 m / 0.8 µm = 1,000,000 to 8 mm / 0.16 mm = 50, a compaction of ×20,000. Initial applications of the method (Berkeley Lab, Tech-X and UCLA) were promising but limited by: • numerical instability, • the larger size of the laser due to the shorter Rayleigh length in the boosted frame. (Figures: longitudinal electric field of laser and plasma from a Warp 2D simulation of a 10 GeV LPA stage, ne = 10^17 cm^-3, γ = 130, in the lab and boosted frames.) *J.-L. Vay, Phys. Rev. Lett. 98, 130405 (2007)
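
A quick numerical check of the numbers on slide 19, using the relations λ' ≈ (1+β)γλ and L' ≈ L/γ; a small sketch with our own variable names, reproducing the compaction factor quoted above.

```python
import math

gamma = 100.0                                 # Lorentz factor of the boosted frame
beta = math.sqrt(1.0 - 1.0 / gamma**2)
lam_lab = 0.8e-6                              # laser wavelength in the lab frame (m)
L_lab = 0.8                                   # plasma length in the lab frame (m)

lam_boost = (1.0 + beta) * gamma * lam_lab    # ~0.16 mm: wavelength stretched
L_boost = L_lab / gamma                       # ~8 mm: plasma contracted

scales_lab = L_lab / lam_lab                  # ~1,000,000
scales_boost = L_boost / lam_boost            # ~50
print(scales_lab, scales_boost, scales_lab / scales_boost)   # compaction ~20,000
```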

  20. Speedup limitations for boosted frame simulations have been overcome. Novel numerical techniques and key observations allowed for efficient mitigation of the numerical instability: • an EM solver with tunable numerical dispersion, • the hyperbolic rotation of the Lorentz transformation converts laser spatial oscillations into time beating, • “strided” digital filtering, • a special time step. (Figures: numerical dispersion curves for exact, Yee and Cole-Karkkainen solvers; instability level vs. time step in the lab frame and the wake frame.) J.-L. Vay, et al., J. Comput. Phys. 230 (2011). A moving antenna enables compact and efficient laser launching: J.-L. Vay, et al., PoP Lett. 18, 030701 (2011); J.-L. Vay, et al., PoP 18, 123103 (2011).
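
The “strided” digital filtering named on slide 20 is, in essence, a smoothing pass applied with a stencil sampled more than one grid cell apart, so that it damps the short-wavelength content feeding the instability while leaving well-resolved modes nearly untouched. Below is a minimal sketch of such a filter on a 1D field array; it is a generic illustration (a strided 1-2-1 binomial pass with arbitrary stride and pass count), not the actual Warp implementation.

```python
import numpy as np

def strided_binomial_filter(field, stride=2, passes=4):
    """Apply a 1-2-1 binomial smoothing pass 'passes' times, with the three-point
    stencil sampled 'stride' cells apart (periodic boundaries). Generic sketch of
    strided digital filtering, not Warp's implementation."""
    f = field.copy()
    for _ in range(passes):
        f = 0.25 * np.roll(f, stride) + 0.5 * f + 0.25 * np.roll(f, -stride)
    return f

# Example: short-wavelength noise is damped while the long-wavelength signal survives.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
signal = np.sin(x)                      # smooth, well-resolved mode
noise = 0.3 * np.sin(64 * x)            # high-frequency content to be filtered out
filtered = strided_binomial_filter(signal + noise, stride=2, passes=4)
print(np.max(np.abs(filtered - signal)))   # far smaller than the 0.3 noise amplitude
```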

  21. Over 1 million × speedup demonstrated on a single 1 TeV stage; >10^4 speedup for a BELLA stage. J.-L. Vay, et al., PoP 18 (2011). (Figure: e-beam from a Warp simulation.) The boosted frame method has enabled direct simulation of 10 GeV stages with strong depletion.

  22. NGLS will deliver coherent X-rays with high repetition rate, unprecedented average brightness, and ultrafast pulses. (Machine layout: high-brightness, high rep-rate gun and injector; CW superconducting linac, laser heater, bunch compressor; array of independent FELs; X-ray beamlines and endstations.) • Beam dynamics simulations have to capture with sufficient fidelity the: • interaction of the electrons with external fields (to accelerate, transport, and compress), • self-interaction that tends to spoil the beam quality (space charge, radiation effects, wakefields). • Spatial scales: • linac: radiation ~1 mm, bunch length 0.1-1 mm, machine length ~500 m, • FEL: radiation ~1 nm, beamline length ~50 m.

  23. Start-to-end simulation of NGLS with the real number of electrons (~2 billion), using IMPACT-T, IMPACT-Z and Genesis. (Figures: evolution of beam kinetic energy and RMS sizes; final current profile (A) vs. bunch length (mm) before the undulator; radiation power temporal distribution at the end of the undulator; averaged FEL radiation power (MW) evolution vs. z (m).) First start-to-end simulation; required ~8 hours on 2k cores of the NERSC Hopper computer.
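
For orientation, simulating the “real number of electrons” means one simulation particle per beam electron; ~2 billion electrons corresponds to a bunch charge of roughly 0.3 nC, a typical photoinjector bunch charge. A quick check of that correspondence (our own arithmetic, not a number quoted on the slide):

```python
ELEMENTARY_CHARGE = 1.602176634e-19     # C
n_electrons = 2.0e9                     # "real number of electrons" from the slide
bunch_charge = n_electrons * ELEMENTARY_CHARGE
print(f"{bunch_charge * 1e9:.2f} nC")   # ~0.32 nC
```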

  24. Boosted frame method accelerates first-principles modeling of CSR effects. Proof-of-principle 3D boosted frame full EM simulation with Warp. • Efficient modeling including full 3D dynamics, arbitrary beam shape and topology • (future work to include conductors). W. Fawley and J.-L. Vay, Proc. IPAC10 (2010)

  25. The Heavy Ion Inertial Fusion (HIF) program is studying the science of ion-heated matter, as well as drivers & targets for inertial fusion energy. Space/time scales span 8 orders of magnitude: from <mm to km> / <ps to 100 ms>. Simulation goal: an integrated, self-consistent predictive capability, from source to target. (Figures: artist's view of a Heavy Ion Fusion power plant; deuterium+tritium target.) • NDCX-II is our new platform for studies of • space-charge-dominated beams, • Warm Dense Matter physics, • beam-target energy coupling, including: • beam(s) generation, acceleration, focusing and compression along the accelerator, • loss of particles at walls, interaction with desorbed gas and electrons, • neutralization from plasma in the chamber, • target physics and diagnostics. => Need large-scale multiphysics computing.

  26. 3D & RZ Warp simulations used to design NDCX-II. (Figures: beam generation, acceleration and compression with aligned solenoids; misaligned solenoids (random offsets); injection of neutralizing plasma; plasma neutralization.) The versatility of the Warp code allows for integrated beam and plasma simulations, combining all the necessary physics self-consistently. A. Friedman, et al, Phys. Plasmas 17, 056704 (2010)

  27. Simulation of e-cloud driven instability and its attenuation using a feedback system in the CERN SPS. A transverse instability is observed in SPS beams due to electron clouds. We use the Particle-In-Cell framework Warp-Posinst to investigate the dynamics of the instability as well as the feasibility and requirements of a feedback system. (Schematic: bunch 1 and bunch 2 in the beam pipe, with electrons and gas.) Warp's mesh refinement & parallelism provide efficiency; spurious image charges from irregular meshing are controlled via guard cells. Posinst provides an advanced secondary electron model: Monte-Carlo generation of true secondary, elastic back-scattered and re-diffused electrons, with energy and angular dependence, from impacting beam ions and electrons.
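
To make the secondary-emission bullet above concrete: a Monte-Carlo model of this kind splits the emission yield into the three channels named on the slide (true secondaries, elastic back-scattered, re-diffused), each with its own dependence on impact energy and angle, and samples how many electrons each wall impact emits. The sketch below uses toy functional forms and invented coefficients purely for illustration; it is NOT Posinst's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

def secondary_yields(energy_eV, cos_theta):
    """Toy yields for the three emission channels vs. impact energy and angle.
    Illustrative functional forms and coefficients only; NOT Posinst's model."""
    # true secondaries: yield peaked near ~300 eV, enhanced at grazing incidence
    delta_ts = 1.8 * (energy_eV / 300.0) * np.exp(1.0 - energy_eV / 300.0) / np.sqrt(cos_theta)
    delta_elastic = 0.5 * np.exp(-energy_eV / 100.0)        # elastic back-scattering
    delta_rediff = 0.2 * (1.0 - np.exp(-energy_eV / 500.0))  # re-diffused component
    return delta_ts, delta_elastic, delta_rediff

def emit(energy_eV, cos_theta):
    """Sample the number of emitted electrons per channel for one wall impact."""
    d_ts, d_el, d_rd = secondary_yields(energy_eV, cos_theta)
    n_ts = rng.poisson(d_ts)            # several true secondaries are possible
    n_el = int(rng.random() < d_el)     # at most one elastically reflected electron
    n_rd = int(rng.random() < d_rd)     # at most one re-diffused electron
    return n_ts, n_el, n_rd

# Example: average total yield for 300 eV electrons hitting the wall at 60 degrees.
hits = [emit(300.0, np.cos(np.radians(60.0))) for _ in range(10000)]
print(np.mean([sum(h) for h in hits]))
```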

  28. Warp and Posinst have been further integrated, enabling fully self-consistent simulation of e-cloud effects: build-up & beam dynamics. (Figures: CERN SPS at injection (26 GeV), turn 1 and turn 500.)

  29. Warp-Posinst enabled the first direct simulation of a train of 72 bunches -- using 2,880 CPUs on Franklin (NERSC). (Figure: average electron cloud density history at a fixed station.) An unexpected, substantial density rise occurs for bunches ≥25 between turns 400 and 600. The e-cloud density rise is associated with emittance and beam radius growth => positive coupling between bunch evolution and electron generation. J.-L. Vay, et al, Ecloud10 Proc. (2010)

  30. Comparison with experimental measurements -- collaboration with SLAC/CERN. (Figures: fractional tune vs. bunch slice, head to tail, for Warp-Posinst(2) and Experiment(1), bunches 29 and 119, turns 100-200; nominal fractional tune = 0.185.) Good qualitative agreement: separation between core and tail with similar tune shift. Warp is also applied to the study of a feedback control system (R. Secondo, in collaboration with SLAC). (1) J. Fox, et al, IPAC10 Proc., p. 2806 (2011); (2) J.-L. Vay, et al, Ecloud10 Proc. (2010)

  31. Multilevel parallelism based on MPI groups is used for parameter scans and optimization (Ryne, 2009). • Sensitivity to solenoid offset & voltage jitter in NDCX-II (Warp) • ensemble of 256 cases • ~4.5 hours on 6,144 CPUs • (simulations by D. Grote). • Optimization of LHC luminosity (BeamBeam3D) • ensemble of 100 populations • ~3 hours on 12,800 CPUs. (Figures: solenoid alignment and voltage jitter scans.) A. Friedman, et al, Phys. Plasmas 17, 056704 (2010); J. Qiang, et al, Proc. PAC 11, p. 1770 (2011). Multilevel parallelism enables very efficient parameter scans and optimization.
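
The MPI-group pattern behind this multilevel parallelism is essentially a split of the world communicator into independent sub-communicators, one per case of the ensemble, each of which then runs its own (possibly itself parallel) simulation. A minimal mpi4py sketch of that pattern follows; the case function and the solenoid-offset parameter are invented placeholders, not the actual Warp or BeamBeam3D driver scripts.

```python
# Run with e.g.: mpirun -n 24 python scan.py   (24 ranks split into 8 groups of 3)
from mpi4py import MPI

def run_case(offset, comm):
    """Placeholder for one simulation case; here it just sums the offset over the
    ranks of its own sub-communicator to stand in for a parallel computation."""
    return comm.allreduce(offset, op=MPI.SUM)

world = MPI.COMM_WORLD
n_groups = 8                                  # number of ensemble cases run concurrently
group_id = world.Get_rank() % n_groups        # assign each rank to one case
case_comm = world.Split(color=group_id, key=world.Get_rank())

# Invented per-case parameter, e.g. a solenoid offset for a sensitivity scan (meters).
solenoid_offset = 1.0e-3 * group_id

# Each group runs its own "simulation" on case_comm, fully independently of the others.
local_result = run_case(solenoid_offset, comm=case_comm)

# Collect one result per case on world rank 0 for analysis or an optimizer step.
results = world.gather(local_result if case_comm.Get_rank() == 0 else None, root=0)
if world.Get_rank() == 0:
    print([r for r in results if r is not None])
```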

  32. AFRD codes used to discover new physics -- Warp simulations of multipactor predicted new “ping-pong” modes*. (Figures: WARP 3D simulation of a rectangular waveguide, red: primaries, blue: secondaries; schematic of particle orbits in a period-2 ping-pong multipactor; cutoff from ping-pong theory vs. Warp vs. normal cutoff.) The new modes lead to a broadening of the area of parameter space where multipactor can occur. Excellent agreement with WARP: “The nice thing is WARP predicted it first, and then resulted in good agreement once I worked out the details of the theory.” – R. Kishek, U. Maryland. *R.A. Kishek, Phys. Rev. Lett. 108, 035003 (2012).

  33. Summary • AFRD develops and maintains cutting-edge accelerator codes • main codes have a worldwide user base • Major impact on DOE/SC (HEP, NP, BES, FES) programs • design, optimize and support accelerators • discover new physics • AFRD algorithms push the limits of the state of the art • several have spread to other major codes outside the lab • Development of in-house codes provides an edge to AFRD • consolidation of efforts underway within the division • Applications of the codes are at the forefront in several important areas of accelerator physics
