

Thom Dunning, Bill Kramer, Marc Snir, Bill Gropp and Wen-mei Hwu. CI Days 2010: Cyberinfrastructure at Purdue, 9 December 2010. Presented by Cristina Beldica, PhD, MBA, Blue Waters Senior Project Manager, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign.


Presentation Transcript


  1. Thom Dunning, Bill Kramer, Marc Snir, Bill Gropp and Wen-mei Hwu CI DAYS 2010 CYBERINFRASTRUCTURE AT PURDUE Cristina Beldica, PhD, MBA, Blue Waters Senior Project Manager, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign

  2. NSF’s Strategy for High-end Computing • Three Resource Levels • Track 3: University owned and operated • Track 2: Several NSF-funded supercomputer & specialized computing centers (TeraGrid) • Track 1: NSF-funded leading-edge computer center • Computing Resources • Track 3: 10s–100 TF • Track 2: 500–1,000 TF • Track 1: see following slide

  3. Background: NSF Track 1 Solicitation. "The petascale HPC environment will enable investigations of computationally challenging problems that require computing systems capable of delivering sustained performance approaching 10^15 floating point operations per second (petaflops) on real applications, that consume large amounts of memory, and/or that work with very large data sets." Leadership-Class System Acquisition - Creating a Petascale Computing Environment for Science and Engineering, NSF 06-573

  4. Requested Attributes of Petascale System • Maximum Core Performance … to minimize the number of cores needed for a given performance level and lessen the impact of sections of code with limited scalability • Low Latency, High Bandwidth Interconnect … to enable science and engineering applications to scale to tens to hundreds of thousands of cores • Large, Fast Memories … to solve the most memory-intensive problems • Large, Fast I/O System and Data Archive … to solve the most data-intensive problems • Reliable Operation … to enable the solution of Grand Challenge problems

  5. Heart of Blue Waters: Two New Chips
  • IBM Power7 Chip: up to 256 GF peak performance; 3.5–4.0 GHz; up to 8 cores, 4 threads/core; caches: L1 (2x64 KB), L2 (256 KB), L3 (32 MB); memory subsystem: two memory controllers, 128 GB/s memory bandwidth
  • IBM Hub Chip: 1.128 TB/s total bandwidth; connections: 192 GB/s QCM connection, 336 GB/s to 7 other local nodes, 240 GB/s to local-remote nodes, 320 GB/s to remote nodes, 40 GB/s general-purpose I/O
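
The 256 GF peak figure above can be sanity-checked with simple arithmetic. The sketch below, in C (one of the languages targeted later in the talk), uses the 8-core and 4.0 GHz numbers from the slide plus one assumption not stated there: 8 double-precision flops per core per cycle.

```c
/* Back-of-the-envelope check of the Power7 peak-performance figure above.
 * Assumption (not on the slide): each core retires 8 double-precision
 * flops per cycle (two 2-wide FMA pipelines, 2 flops per FMA). */
#include <stdio.h>

int main(void) {
    const double ghz           = 4.0; /* top of the 3.5-4.0 GHz range        */
    const int    cores         = 8;   /* up to 8 cores per chip              */
    const int    flops_per_cyc = 8;   /* assumed DP flops per cycle per core */

    double chip_gf = ghz * cores * flops_per_cyc;       /* GF per chip  */
    printf("Power7 peak: %.0f GF per chip\n", chip_gf); /* prints 256   */
    return 0;
}
```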

  6. Building Blue Waters. Blue Waters will be the most powerful computer in the world for scientific research when it comes on line in 2011. It is built from components that can also be used to build systems with a wide range of capabilities, from deskside systems to beyond Blue Waters.
  • Blue Waters system: ~10 PF peak, ~1 PF sustained, >300,000 cores, >1 PB of memory, >25 PB of disk storage, 500 PB of archival storage, ≥100 Gbps connectivity
  • 3-Rack Building Block: 32 IH server drawers, 256 TF (peak), 32 TB memory, 128 TB/s memory bandwidth, 3 storage systems (>500 TB), 10 tape drive connections
  • IH Server Drawer: 8 QCMs (256 cores), 8 TF (peak), 1 TB memory, 4 TB/s memory bandwidth, 8 hub chips, 9 TB/s communication bandwidth, power supplies, PCIe slots, fully water cooled
  • Quad-chip Module (QCM): 4 Power7 chips, 1 TF (peak), 128 GB memory, 512 GB/s memory bandwidth, hub chip with 1.128 TB/s communication bandwidth
  • Power7 Chip: 8 cores, 32 threads, L1/L2/L3 cache (32 MB), up to 256 GF (peak), 128 GB/s memory bandwidth, 45 nm technology
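
The peak numbers in this hierarchy compose by straightforward multiplication; a minimal sketch using only figures quoted on the slide:

```c
/* How the building-block peak figures above compose, chip -> QCM -> drawer
 * -> 3-rack building block, using only numbers quoted on the slide. */
#include <stdio.h>

int main(void) {
    double chip_tf   = 0.256;             /* Power7 chip: up to 256 GF peak    */
    double qcm_tf    = 4.0  * chip_tf;    /* Quad-chip Module: 4 chips, ~1 TF  */
    double drawer_tf = 8.0  * qcm_tf;     /* IH server drawer: 8 QCMs, ~8 TF   */
    double block_tf  = 32.0 * drawer_tf;  /* 3-rack block: 32 drawers, ~256 TF */

    printf("QCM %.3f TF, drawer %.2f TF, 3-rack block %.1f TF\n",
           qcm_tf, drawer_tf, block_tf);
    return 0;
}
```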

  7. ORNL Jaguar (#2) vs. NCSA Blue Waters
  • Vendor (Model): Cray (XT5) | IBM (PERCS)
  • Processor: AMD Opteron | IBM Power7
  • Peak Performance (PF): 2.3 | ≳10 (~4x)
  • Sustained Performance (PF): ? | ≳1
  • Number of Cores/Chip: 6 | 8 (1⅓x)
  • Number of Processor Cores: 224,256 | ≳300,000 (<1½x)
  • Amount of Memory (TB): 299 | ≳1,200 (4x)
  • Memory Bandwidth (PB/sec): 0.478 | ≳5 (10x)
  • Amount of On-line Disk Storage (PB): 10 | ≳25 (>2½x)
  • Sustained Disk Transfer (TB/sec): 0.24 | ≳1.5 (>6x sustained)

  8. I/O model: global, parallel shared file system (>10 PB) and archival storage (GPFS/HPSS); MPI I/O
  • Environment: traditional (command line); Eclipse IDE (application development, debugging, performance tuning, job and workflow management)
  • Languages: C/C++, Fortran (77–2008, including CAF), UPC
  • Performance tuning: HPC and HPCS toolkits, open-source tools
  • Resource manager: batch and interactive access
  • Parallel debugging at full scale
  • Full-featured OS (RHEL 6 Linux); sockets, threads, shared memory, checkpoint/restart
  • Libraries: MASS, ESSL, PESSL, PETSc, visualization…
  • Programming models: MPI (MPI-2), OpenMP, PGAS, Charm++, Cactus, OpenSHMEM
  • Low-level communications API supporting active messages (LAPI++)
  • Hardware: multicore Power7 processor with Simultaneous Multithreading (SMT) and vector extensions (VSX); private L1 and L2 cache per core, shared L3 cache per chip; high-performance, low-latency interconnect supporting RDMA
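
As a concrete illustration of two of the programming models listed above, here is a minimal hybrid MPI + OpenMP program in C; it is generic MPI-2/OpenMP code, not Blue Waters-specific, and the build/launch commands vary by system (e.g. mpicc -fopenmp hello.c, then mpirun -n 4 ./a.out).

```c
/* Minimal hybrid MPI + OpenMP example: MPI ranks across nodes, OpenMP
 * threads within a rank. Generic code, not specific to Blue Waters. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nranks;

    /* Ask for a threaded MPI level because OpenMP threads run in each rank. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel
    printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
           rank, nranks, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}
```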

  9. Blue Waters Storage System • >18 PB of on-line disk storage distributed throughout the system • Peak aggregate I/O rate >1.5 TB/s • Racks with disk enclosures + nodes acting as I/O servers distributed throughout the system • Redundant servers/paths • Advanced RAID & GPFS for high availability & reliability • Large near-line tape archive • Eventually scaling to 500 PB • Seamlessly integrated into the GPFS namespace • Runs HPSS and utilizes the GPFS-HPSS Interface (GHI) • Allows for transparent migration of data between disk & tape
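
Applications typically reach a parallel file system like this through MPI I/O (listed on the previous slide). Below is a minimal sketch in which every rank writes its own block of one shared file collectively; the file name and block size are hypothetical examples, not values from the talk.

```c
/* Sketch: every MPI rank writes its own block of one shared file with
 * MPI I/O. The file name and block size are arbitrary examples. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    enum { N = 1 << 20 };                       /* 1 Mi doubles per rank */
    double *buf = malloc(N * sizeof *buf);
    for (int i = 0; i < N; i++) buf[i] = rank;  /* fill with rank id     */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "checkpoint.dat",   /* hypothetical path */
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write: rank r writes at offset r * N * sizeof(double). */
    MPI_Offset off = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at_all(fh, off, buf, N, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```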

  10. Blue Waters External Networking. Our goal is that Blue Waters will never be the bottleneck in a data transfer. Up to 44 10 Gb/s Ethernet interfaces will initially be configured on Blue Waters, providing connectivity within NCSA and externally. On day one of production there will be 100 Gb/s of offsite bandwidth to POPs in Chicago.* There is an option to upgrade bandwidth in later years based on need and national backbone availability (a 100 Gb/s link is possible). * = subject to final full-service phase funding level

  11. National Petascale Computing Facility. Partners: EYP MCF/Gensler, IBM, Yahoo! • Energy Efficiency • LEED Gold certified • PUE = 1.1–1.2 • Modern Data Center • 90,000+ ft² total • 30,000 ft² raised floor • 20,000 ft² machine room gallery

  12. Virtual School 2010 • Three summer schools during July–August: Petascale Programming Environment and Tools; Big Data in Science; Proven Algorithmic Techniques for Many-core Processors • Each delivered at 10 sites using HD video, 21 sites in total • Prerequisite on-line courses: Intro to MPI, OpenMP and CUDA • 725 students submitted 1,077 registrations for the sessions • Over 200 registered for each 2010 summer school • Growth: 2008 – 42 participants/1 site; 2009 – 230 participants/6 sites

  13. 2010 VSCSE Summer Schools and Host Institutions

  14. Great Lakes Consortium for Petascale Computation. Goal: Facilitate the widespread and effective use of petascale computing to address frontier research questions in science, technology and engineering at research, educational and industrial organizations across the region and nation. Charter members: Argonne National Laboratory; Fermi National Accelerator Laboratory; Illinois Mathematics and Science Academy; Illinois Wesleyan University; Indiana University*; Iowa State University; Krell Institute, Inc.; Louisiana State University; Michigan State University*; Northwestern University*; Parkland Community College; Pennsylvania State University*; Purdue University*; The Ohio State University*; Shiloh Community Unit School District #1; Shodor Education Foundation, Inc.; SURA (60-plus universities); University of Chicago*; University of Illinois at Chicago*; University of Illinois at Urbana-Champaign*; University of Iowa*; University of Michigan*; University of Minnesota*; University of North Carolina–Chapel Hill; University of Wisconsin–Madison*; Wayne City High School (* CIC universities)

  15. Sustained petascale computing will enable advances in a broad range of science and engineering disciplines: weather & climate forecasting, molecular science, astrophysics, health, life science, materials, astronomy, earth science.

  16. Petascale Computing Resources Allocations • PRAC Solicitation • http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503224 • 2008–2009 awards • Number of PRAC Projects Awarded as of 3/3/2010: 18 • Total Number of Researchers: 69 • GLCPC Researchers: 28 • 2010 awards • In progress, first announced in Nov 2010 • Next round • Proposals due: March 17, 2011

  17. Geographic Distribution of PRAC Awardees

  18. Getting Involved with Blue Waters • (*) Non-disclosure agreements (NDAs) are currently required to access non-public information about Blue Waters • Apply for a PRAC award • Main avenue for getting a BW allocation • Receive support from the BW AUS team • Participate in BW PRAC meetings • Access to and Assistance with Blue Waters Hardware (*) • Access to and Assistance with Blue Waters Software (*) • Blue Waters-specific Training • On-line documentation, webinars/tutorials/on-line courses (~8 per year), workshops (~2 per year) • Tips for writing a successful PRAC proposal: http://www.ncsa.illinois.edu/BlueWaters/pdfs/webinar_Prospective_PRAC.pdf

  19. Getting Involved with Blue Waters (cont.) • Get involved with the GLCPC: http://www.greatlakesconsortium.org/ • GLCPC allocation on BW • Education and outreach activities • Contacts: • GLCPC President: Maxine Brown, Associate Director, Electronic Visualization Lab, University of Illinois at Chicago, maxine@uic.edu • GLCPC Board Member: W. Gerry McCartney, Vice President for Information Technology, Purdue University, mccart@purdue.edu • Purdue Institutional Representative: John Campbell, Associate Vice President for Information Technology, Purdue University, john-campbell@purdue.edu

  20. Getting Involved with Blue Waters (cont.) • Participate in BW educational activities • Professional development workshops for undergraduate faculty • Undergraduate materials development assistance • Undergraduate student internships • Virtual School of Computational Science and Engineering http://www.vscse.org/ • Contact: • Scott Lathrop, Technical Program Manager for Education and Outreach, NCSA and Shodor Foundation, scott@ncsa.illinois.edu

  21. For more information, see: • Blue Waters Website • http://www.ncsa.uiuc.edu/BlueWaters • IBM Power Systems Website • http://www-03.ibm.com/systems/power/ (Power7 information coming soon) • PRAC Solicitation • http://www.nsf.gov/pubs/2008/nsf08529/nsf08529.htm

  22. Questions? Dr. Brett Bode, NCSA/University of Illinois, Blue Waters Software Development Manager, bbode@ncsa.uiuc.edu, http://www.ncsa.uiuc.edu/BlueWaters, (217) 244-5187

  23. Key Points for Programmers • 32-core OS image (node) • 128 GB of memory per node (almost flat access) • SMT can be used to run 32 to 128 threads per node • Fast, large caches: 32 KB L1, 256 KB L2 and 4 MB L3 (32 MB shared per chip) • Dual double-precision SIMD units (VSX) • Large, very fast global file system
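
A minimal OpenMP sketch of the threading point above: one process fills a 32-core node with threads, and the count is chosen at run time (32 for one per core, up to 128 with SMT). The loop and array size are arbitrary examples.

```c
/* Sketch: filling one node with OpenMP threads. The slide above says a node
 * has 32 cores and SMT allows 32-128 threads; OMP_NUM_THREADS selects the
 * count at run time (e.g. OMP_NUM_THREADS=128 ./a.out). */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int n = 1 << 24;                 /* arbitrary problem size        */
    double *a = malloc(n * sizeof *a);

    #pragma omp parallel for               /* work split across the threads */
    for (int i = 0; i < n; i++)
        a[i] = 2.0 * i;

    printf("ran with up to %d threads, a[n-1] = %g\n",
           omp_get_max_threads(), a[n - 1]);
    free(a);
    return 0;
}
```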

  24. Key Points for Programmers (cont.) • Very fast and scalable interconnect • 24 GB/s to each node in the same drawer • 5 GB/s to each node in the same supernode • 10 GB/s to each remote supernode • Maximum of 3 hops (direct routing) or 5 hops between any pair of nodes in the system • The main bottleneck may be the 50 GB/s (unidirectional) bandwidth between a QCM and its hub chip • The benefit of explicit topology mapping is unclear; it may not be needed
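
One generic way to see these link bandwidths from an application is an MPI ping-pong between two ranks on different nodes. The sketch below measures point-to-point bandwidth; message size and repetition count are arbitrary choices, and which link is exercised depends on where the job launcher places the two ranks.

```c
/* Sketch: MPI ping-pong between ranks 0 and 1 to estimate point-to-point
 * bandwidth. Where the two ranks land (same drawer, same supernode, or a
 * remote supernode) determines which of the links above is exercised. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    enum { BYTES = 8 << 20, REPS = 100 };  /* 8 MiB messages, 100 round trips */
    char *buf = malloc(BYTES);
    memset(buf, 0, BYTES);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double dt = MPI_Wtime() - t0;

    if (rank == 0)  /* two transfers per round trip */
        printf("~%.2f GB/s\n", 2.0 * REPS * BYTES / dt / 1e9);

    free(buf);
    MPI_Finalize();
    return 0;
}
```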
