
The Alliance Distributed Supercomputing Facilities

Opening Talk to the Alliance User Advisory Council, Held at Supercomputing '98 in Orlando, Florida, December 5, 1998.


Presentation Transcript


  1. The Alliance Distributed Supercomputing Facilities • Opening Talk to the Alliance User Advisory Council • Held at Supercomputing '98 in Orlando, Florida, December 5, 1998

  2. The National PACI Program - Partners and Supercomputer Users • 850 Projects in 280 Universities • 60 Partner Universities

  3. PACI - The NSF Partnerships for Advanced Computational Infrastructure • The Two Partnerships (NPACI & Alliance) Each Have: • Leading-Edge Site - The Site With the Very Large Scale Computing Systems • Mid-Level Resource Sites - Partners With Alternative or Experimental Computer Architectures, Data Stores, Visualization Capabilities, Etc. • Applications Technologies - Computational Science Partners Involved in Development, Testing, and Evaluation of Infrastructure • Enabling Technologies - Computer Science Partners Developing Tools and Software Infrastructure Driven by Application Partners • Education, Outreach, and Training Partners • Network Infrastructure Is Critical • www.ncsa.uiuc.edu • www.npaci.edu

  4. NCSA is Combining Shared Memory Programming with Massive Parallelism - Doubling Every Nine Months! [Chart: successive SGI generations - Challenge, Power Challenge, Origin, SN1]
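A quick check of what a nine-month doubling time implies (arithmetic only; the nine-month figure is the slide's):

```python
# Capacity that doubles every 9 months grows by 2**(12/9) ~= 2.52x per
# year, well ahead of a classic 18-month Moore's-law doubling.
doubling_months = 9
annual_factor = 2 ** (12 / doubling_months)
print(f"annual growth factor: {annual_factor:.2f}x")                # ~2.52x
print(f"18-month doubling, for comparison: {2 ** (12 / 18):.2f}x")  # ~1.59x
```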

  5. NCSA Users by System [Chart: number of users (0-600) per system, Sep 94 through Sep 98: Cray Y-MP (retired 12/94), Convex C3880 (retired 10/95), CM-5 (retired 1/97), Convex Exemplar SPP-1200 (retired 5/98), SGI Power Challenge Array (PCA, retired 7/98), Exemplar SPP-2000, and Origin]

  6. Millions of NUs Used at NCSA FY93 to FY98

  7. NCSA Supplies Cycles to a Logarithmic Demand Function of Projects - FY98 Usage [Chart: projects ordered from Super through Large, Medium, and Small to Tiny; labeled points include Chen (125), Solomon (82), Knight (67), Goodrich (59), Goddard (41), Karniadakis (31), Droegemeier (24), Kollman (10), Hawley (6), Suen (4), and Sugar (2); under 10 NUs: Evans, Ghoniem, Jacobs, Long, York]

  8. Evolution of the NCSA Project Distribution Function FY93 to FY98

  9. Rapid Increase in Large Projects at NCSA FY93-98

  10. Breakout in Supporting Super Projects at NCSA in the Last Year

  11. Migration of NCSA User Distribution Toward the High End [Chart: percentage change in number of projects by size class: +114%, -27%, +350%, -79%, +400%]

  12. Alliance LES Chose 27 Large PSC Projects to Track Out of 100 Targeted Projects [Bar chart: NUs used at NCSA per quarter, 0 to 800,000, 1QFY97 through 3QFY98; per-quarter counts of the 27 projects computing at NCSA: 7, 8, 12, 15, 16, 24, 26] • Includes: Droegemeier, Freeman, Karplus, Kollman, Schulten, Sugar

  13. Disciplines Represented in the Large Academic Projects at the Alliance LES • Over 5,000 NUs Annually Per Project • >100 Projects, Over 3.2 Million NUs • Note Mapping to AT Teams: Nanomaterials, Cosmology, Chemical Engineering, Molecular Biology, Environmental Hydrology, Astro and Bio Instruments • 6/1/97 to 5/31/98, NCSA

  14. Application Performance Scaling on 128-Processor Origin Conclusion -- 128-Processor Origin is a 15 GF Machine (20-25% of Peak)
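The 20-25% figure is easy to reproduce from the per-processor peak; a sketch using the 500 Mflops/processor Origin2000 peak quoted in the table on slide 30:

```python
# Sustained-vs-peak check for the 128-processor Origin.
procs = 128
peak_per_proc_gf = 0.5     # 500 Mflops peak per processor (slide 30)
sustained_gf = 15          # measured application performance (this slide)

peak_gf = procs * peak_per_proc_gf    # 64 GF aggregate peak
efficiency = sustained_gf / peak_gf   # ~0.23
print(f"aggregate peak: {peak_gf:.0f} GF")
print(f"efficiency: {efficiency:.0%}")  # ~23%, i.e. the quoted 20-25% of peak
```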

  15. Origin Brings Shared Memory to MPP Scalability

  16. The Growth Rate of the National Capacity is Slowing Down Again [Chart: Total NU (Normalized CPU Hours), log scale from 10,000 to 1,000,000,000, by fiscal year 1986-2002; 70% annual growth this year; callout: "Let's Blow This Up!"] • Source: Quantum Research

  17. The Drop in High End Capacity Available to National Academic Researchers • Source: Quantum Research, FY96-98

  18. Projection: Major Gap Has Developed in National Usage at NSF Supercomputer Centers

  19. Allocated Capacity for Meritorious NSF Large National Projects Doubled • Data from NSF Metacenter and NRAC Reviews

  20. Clustered Shared Memory Computers are Today's High End • NCSA has 6 x 128 Origin Processors • ASC has 4 x 128 • ARL has 3 x 128 • CEWES has 1 x 128 • NAVO has 1 x 128 • Los Alamos ASCI Blue Will Have 48 x 128! • Livermore ASCI Blue has a 1536 x 4 IBM SP

  21. High-End Computing Enables High Resolution of Flow Details • 1024x1024x1024 - A Billion-Zone Computation of Compressible Turbulence • This Simulation Run on Los Alamos SGI Origin Array • U. Minn. SGI Visual Supercomputer Renders Images of Vorticity • LCSE, Univ. of Minnesota, www.lcse.umn.edu/research/lanlrun/
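For a sense of scale, a rough memory estimate for a 1024^3 grid (the per-zone variable count and precision below are illustrative assumptions, not from the slide):

```python
# Rough memory footprint of a 1024^3 compressible-turbulence grid.
# Assumes ~5 conserved flow variables per zone in 4-byte precision;
# real codes also carry work arrays, so this is a lower bound.
zones = 1024 ** 3        # ~1.07 billion zones
vars_per_zone = 5        # density, energy, 3 momenta (assumed)
bytes_per_var = 4        # single precision (assumed)

gb = zones * vars_per_zone * bytes_per_var / 2 ** 30
print(f"~{gb:.0f} GB just for the primary state arrays")  # ~20 GB
```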

  22. Cycles Used by NSF Community at the NSF Supercomputer Centers by Vendor • June 1, 1997 through May 31, 1998 • CTC, NCSA, PSC, SDSC • 1019 Projects Using 100% of the Cycles [Chart: cycles by architecture - C/T90, Origin/PC, T3D/E] • SGI SN1 is the Natural Upgrade for 84% of Cycles!

  23. Peak Teraflops in Aggressive Upgrade Plan

  24. Deputy Director Bordogna on NSF Leadership in Information Technologies • Three Important Priorities for NSF in the Area of IT for the Future: • The First Area Is Fundamental and High-Risk IT Research, Including Advanced Computation Research • The Second Priority Area for NSF Is Competitive Access and Use of High-End Computing and Networking • The Third Priority Is Investing in IT Education at All Levels

  25. President’s Information Technology Advisory Committee Interim Report • More Long Term IT Research Needed • Fundamental Research in Software Development • R & D and Testbeds for Scalable Infrastructure • Increase Support for High End Computing • Socio-Economic and Workforce Impacts • Address the Shortage of High-Tech Workers • Study Social and Economic Impacts of IT Adoption • Modes and Management of Federal IT Research • Fund Projects of Broader Scope and Longer Duration • Virtual Centers for Expeditions into the 21st Century • NSF as Lead Agency for Coordinating IT Research Congressional Testimony 10/6/98

  26. PITAC Draft Refinement of High-End Acquisition Recommendation • Fund the Acquisition of the Most Powerful High-End Computing Systems to Support Long Term Basic Research in Science and Engineering • Access for (Highest Priority): • ALL Academic Researchers • ALL Disciplines • ALL Universities • Access for (Second Priority): • Government Researchers • Industrial Researchers

  27. Harnessing the Unused Cycles of Networks of Workstations • Alliance Nanotechnologies Team Used Univ. of Wisconsin Condor Cluster - Burned 1 CPU-Year in Two Weeks! • University of Kansas is Installing Condor [Chart: Condor Cycles]
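The slide's own two numbers fix the effective size of the scavenged pool; simple arithmetic:

```python
# One CPU-year of work delivered in two weeks of wall-clock time is the
# equivalent of ~26 workstations computing around the clock.
cpu_years_of_work = 1
wall_clock_weeks = 2
equivalent_machines = cpu_years_of_work * 52 / wall_clock_weeks
print(f"~{equivalent_machines:.0f} dedicated machines' worth of idle cycles")
```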

  28. NT Workstation Shipments Rapidly Surpassing UNIX • Source: IDC, Wall Street Journal, 3/6/98

  29. PACI Fostering Commodity Computing • "Supercomputer performance at mail-order prices" - Jim Gray, Microsoft • Andrew Chien, CS UIUC --> UCSD • Rob Pennington, NCSA • Reagan Moore, SDSC • Plan to Link UCSD & UIUC Clusters • 128 Hewlett-Packard 300 MHz and 64 Compaq 333 MHz Nodes • Various Applications Sustain 7 GF on 128 Processors

  30. Performance Analysis is Key Computer Science Research Enabling Computational Science

  System                  Mflops/Proc   Flops/Byte   Flops/NetworkRT
  Cray T3E                       1200       ~2              ~2,500
  SGI Origin2000                  500       ~0.5            ~1,000
  HPVM NT Supercluster            300       ~3.2            ~6,000
  IBM SP2                         550       ~3.7           ~38,000
  Berkeley NOW II                 320       ~8.0            ~6,400
  Beowulf (100Mbit)               300      ~25            ~500,000
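The two ratios are machine-balance metrics: flops a processor can issue per byte of network bandwidth, and per network round trip; larger values mean communication is relatively costlier, which is why Beowulf-class clusters suit codes that communicate rarely. A sketch of how the Cray T3E row falls out of raw specs (the bandwidth and latency figures below are assumptions chosen to reproduce the table, not vendor data):

```python
# Balance metrics for a parallel machine, illustrated on the Cray T3E row.
mflops_per_proc = 1200   # per-processor peak, from the table above
bandwidth_mb_s = 600     # assumed network bandwidth per processor (MB/s)
round_trip_us = 2.1      # assumed network round-trip latency (microseconds)

# Mflops / (MB/s): the 10^6 factors cancel, leaving flops per byte.
flops_per_byte = mflops_per_proc / bandwidth_mb_s       # ~2
# Mflops * microseconds: 10^6 and 10^-6 cancel, leaving flops per round trip.
flops_per_round_trip = mflops_per_proc * round_trip_us  # ~2,500
print(f"flops/byte: ~{flops_per_byte:.1f},"
      f" flops/network RT: ~{flops_per_round_trip:,.0f}")
```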

  31. Performance of Scalable Systems Shows the Promise of Local Clustered PCs • Solving 2D Navier-Stokes Kernel • Danesh Tafti, Rob Pennington, NCSA; Andrew Chien (UIUC, UCSD)

  32. Near Perfect Scaling of Cactus - 3D Dynamic Solver for the Einstein GR Equations • Cactus was Developed by Paul Walker, MPI-Potsdam, UIUC, NCSA • Ratio of GFLOPs: Origin = 2.5x NT SC • Paul Walker, John Shalf, Rob Pennington, Andrew Chien • NCSA

  33. QCD Performance on Various Machines • Doug Toussaint and Kostas Orginos, University of Arizona

  34. The Road to Intel's Merced - The Convergence of Scientific and Commercial Computing • IA-64 Co-Developed by Intel and Hewlett-Packard • http://developer.intel.com/solutions/archive/issue5/focus.htm#FOUR

  35. The NCSA Information Workbench - An Architecture for Web-Based Computing [Diagram: the user's web browser sends instructions and queries to the Workbench Server; its Format Translator, Query Engine, and Program Driver dispatch instructions to application programs (which may have varying interfaces and be written in different languages) and queries to information sources (which may be of varying formats), then return results and output to the user] • NCSA Computational Biology Group
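The data flow on this slide (browser in front; a server that translates formats, runs queries, and drives programs; heterogeneous tools and data behind) can be sketched in a few lines. A minimal, hypothetical illustration, not the NCSA implementation; `echo` stands in for a real analysis program:

```python
# Minimal sketch of the workbench pattern: a web server receives a user
# query, drives an external program, and returns translated results.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class WorkbenchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        sequence = query.get("seq", [""])[0]
        # "Program driver": invoke an external tool; echo (Unix) stands in
        # for a real analysis program with its own I/O format.
        result = subprocess.run(["echo", sequence],
                                capture_output=True, text=True)
        # "Format translator": wrap the tool's raw output for the browser.
        body = f"<html><body><pre>{result.stdout}</pre></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # e.g. browse to http://localhost:8000/?seq=ACGT
    HTTPServer(("localhost", 8000), WorkbenchHandler).serve_forever()
```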

  36. Using a Web Browser to Run Programs and Analyze Data Worldwide • NCSA Biology Workbench Has Over 6,000 Users From Over 20 Countries [Diagram topics: Genomes, Gene Products, Structure & Function, Pathways & Physiology, Populations & Evolution, Ecosystems]
