State of the Labs: NERSC Update

Presentation Transcript


  1. State of the Labs: NERSC Update Juan Meza, Lawrence Berkeley National Laboratory

  2. NERSC Center Overview • Funded by DOE, annual budget of $28M, about 65 staff • Supports open, unclassified, basic research • Located in the hills next to the University of California, Berkeley campus • Close collaborations between the university and NERSC in computer science and computational science • Close collaboration with about 125 scientists in the Computational Research Division at LBNL

  3. NERSC System Architecture • IBM SP NERSC-3 “Seaborg”: 6,656 processors (peak 10 TFlop/s), 7.8 TB memory, 44 TB disk; Ratio = (0.8, 4.4) • HPSS archival storage: 12 IBM SP servers, 15 TB of cache disk, 8 STK tape robots, 44,000 tape slots, 20 × 200 GB drives, 60 × 20 GB drives, maximum capacity 5-8 PB • Visualization server “escher”: SGI Onyx 3400, 12 processors, 2 Infinite Reality 4 graphics pipes, 24 GB memory, 4 TB disk • SGI symbolic manipulation server • PDSF: 400 processors (peak 375 GFlop/s), 360 GB memory, 35 TB disk, Gigabit and Fast Ethernet; Ratio = (1, 93) • LBNL “Alvarez” cluster: 174 processors (peak 150 GFlop/s), 87 GB memory, 1.5 TB disk, Myrinet 2000; Ratio = (0.6, 100) • Testbeds and servers • FC disk • Networking: 10/100 Mbit Ethernet, Gigabit Ethernet (including jumbo frames), and an OC-48 (2,400 Mbps) link to ESnet • Ratio = (RAM bytes per flop, disk bytes per flop)
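
The balance figure quoted for each system follows the definition at the end of the slide, Ratio = (RAM bytes per flop, disk bytes per flop). As a minimal illustration, the Python sketch below recomputes the metric for two of the systems from the memory, disk, and peak figures listed above; the helper and variable names are ours, not from any NERSC tool.

# Minimal sketch of the balance metric defined on slide 3:
# Ratio = (RAM bytes per flop, disk bytes per flop).
# Capacity figures are taken from the slide; names are illustrative.

TB = 1e12
GB = 1e9

def balance_ratio(ram_bytes, disk_bytes, peak_flops):
    """Return (RAM bytes per flop, disk bytes per flop)."""
    return ram_bytes / peak_flops, disk_bytes / peak_flops

# Seaborg: 7.8 TB memory, 44 TB disk, 10 TFlop/s peak
print(balance_ratio(7.8 * TB, 44 * TB, 10e12))   # ~(0.8, 4.4)

# PDSF: 360 GB memory, 35 TB disk, 375 GFlop/s peak
print(balance_ratio(360 * GB, 35 * TB, 375e9))   # ~(1, 93)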

  4. 2003 Accomplishments • High End Systems • NERSC 3 (“Seaborg”) 10 Tflop/s system in full production • Increased HPSS storage capacity to > 8 Pbytes • Evaluation of alternative architectures (SX-6, X-1, ES, BG/L) • Initiated procurement of NCS (New Computational System) • Comprehensive Scientific Support • Reached >95% utilization on Seaborg • Excellent results in User Survey • Intensive Support for Scientific Challenge Teams • INCITE allocations and SciDAC projects • Unified Science Environment • All NERSC systems on the grid (2/2004)

  5. Immediate High Utilization of “Seaborg” [chart: utilization quickly reached roughly 90%]

  6. NERSC FY 03 Usage by Institution Type

  7. FY03 Leading DOE laboratory usage (>500,000 processor hours)

  8. FY 03 Usage by Scientific Discipline

  9. Terascale Simulations of Supernovae • PI: Tony Mezzacappa, ORNL • Allocation category: SciDAC • Code: neutrino scattering on lattices (OAK3D) • Kernel: complex linear equations • Performance: 537 Mflop/s per processor (35% of peak) • Scalability: 1.1 Tflop/s on 2,048 processors • Allocation: 565,000 MPP hours awarded; 1.52 million requested and needed
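
A quick back-of-the-envelope check ties these figures to Seaborg's specifications on slide 3 (10 TFlop/s peak across 6,656 processors, i.e. roughly 1.5 GFlop/s per processor). The sketch below only reproduces that arithmetic; the variable names are illustrative.

# Back-of-the-envelope check of the OAK3D performance figures above.
# Seaborg's peak and processor count come from slide 3; names are illustrative.

seaborg_peak_flops = 10e12                 # peak 10 TFlop/s
seaborg_processors = 6_656
per_proc_peak = seaborg_peak_flops / seaborg_processors    # ~1.5 GFlop/s per processor

measured_per_proc = 537e6                  # 537 Mflop/s per processor (this slide)
print(f"fraction of peak: {measured_per_proc / per_proc_peak:.1%}")   # ~35.7%

procs = 2_048
print(f"aggregate on {procs} processors: {measured_per_proc * procs / 1e12:.1f} Tflop/s")   # ~1.1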

  10. Simulation Matches Gamma Ray Burst • SciDAC project by Stan Woosley et al., UC Santa Cruz • In March 2003 the HETE satellite observed an unusually close and bright GRB • It is the “Rosetta stone” of GRBs, because it conclusively established that at least some long GRBs come from supernovas • By 1993, 135 different theories on the origin of GRBs had been published in scientific journals • NERSC simulations show that the “collapsar” model best describes the data [1] J. Hjorth, J. Sollerman, P. Møller, J. P. U. Fynbo, S. E. Woosley, et al., “A very energetic supernova associated with the γ-ray burst of 29 March 2003,” Nature 423, 847 (2003).

  11. ProteinShop: Computational Steering of Protein Folding • Teresa Head-Gordon et al., UC Berkeley (optimization and protein folding), and Silvia Crivelli et al., LBNL (visualization) • ProteinShop incorporates inverse kinematics from robotics and video gaming to let biologists manipulate proteins interactively • Optimization then finds a local energy minimum on Seaborg • Permits a much larger search space and the integration of intuitive knowledge • Best Paper award at the IEEE Visualization Conference and “most innovative” at CASP • Submitted for an R&D 100 award
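
To illustrate the local energy minimization step in isolation, here is a toy sketch. It is not ProteinShop's actual energy model or solver (the real workflow uses physical force fields and runs on Seaborg); it simply minimizes a made-up energy over a vector of dihedral angles to show what finding a local minimum of a conformation looks like.

# Toy illustration only: local minimization of a made-up dihedral-angle
# "energy". ProteinShop's real energy model and solver are not reproduced here.
import numpy as np
from scipy.optimize import minimize

def toy_energy(angles):
    """A rugged, periodic pseudo-energy over a vector of dihedral angles (radians)."""
    return float(np.sum(np.cos(3.0 * angles)) + 0.1 * np.sum(angles ** 2))

# Start from an interactively chosen conformation (here: random angles).
rng = np.random.default_rng(0)
start = rng.uniform(-np.pi, np.pi, size=20)

result = minimize(toy_energy, start, method="L-BFGS-B")
print("local minimum energy:", result.fun)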

  12. INCITE • INCITE (Innovative and Novel Computational Impact on Theory and Experiment) devotes 10% (4.9M hours) of NERSC resources to the most significant science, regardless of DOE affiliation • Proposal demographics: 52 proposals received • 130,508,660 CPU hours requested (one proposal alone asked for 71,761,920 hours; the rest were well justified, at less than 5M hours each) • An oversubscription of 13 to 29 times

  13. FY04 INCITE Awards: Innovative and Novel Computational Impact on Theory and Experiment (INCITE) • Quantum Monte Carlo Study of Photosynthetic Centers; William Lester, Berkeley Lab • Stellar Explosions in Three Dimensions; Tomasz Plewa, University of Chicago • Fluid Turbulence; P.K. Yeung, Georgia Institute of Technology

  14. Thank you
