
State of the CCS



Presentation Transcript


  1. State of the CCS SOS 8 April 13, 2004 James B. White III (Trey) trey@ornl.gov, presenting for Buddy Bland (virtual)

  2. State of the CCS • CCS as a user facility • CCS as a DOE Advanced Computing Research Testbed (ACRT) • Future plans

  3. Facilities • Computer facility • 40,000 ft² over two floors • 36” raised floor (lower floor) • 8 MW power, 3600 tons cooling • Office space for 450 • Classrooms and training areas • Labs for visualization, computer science, and networking

  4. User Facility • CCS designated by DOE as a user facility • Supports users from academia and industry • Pursuing agreements with Boeing and Dow Chemical

  5. User Community • 70% of usage is from users outside of ORNL • Users come from all around the country

  6. FY03 Usage by Discipline

  7. CCS Usage Model • Small number of large projects • CCS supports liaisons for large projects • Center can be dedicated to single task of national importance • Human genome • HFIR restart • IPCC

  8. Advanced Computing Research Testbed • ACRT examines promising new computer architectures for DOE SC • Determine usability for SC applications • Work with vendors to improve systems • Application-based evaluations

  9. Past Evaluations • Intel iPSC/2 (1988) • Intel iPSC/860 (1990) • KSR-1 (1991) • Intel Paragon XP/S-35 (1992) • Intel Paragon MP XP/S-150 (1995) • IBM S80 (1999) • IBM Winterhawk and Nighthawk (1999) • SRC Prototype (1999) • GSN Switch (2000) • Compaq AlphaServer SC (2000) • IBM Power4 and Federation (2001-2004)

  10. Current Evaluations • Cray X1 - scalable vector • SGI Altix - large shared memory • IBM Federation Cluster - interconnect • http://www.csm.ornl.gov/evaluation/

  11. Cray X1 • World’s largest X1 • 8 cabinets • 256 MSPs, 3.2 TF • 1 TB memory • 32 TB local disk • Cabinets half populated to test topology and facilitate upgrade

  12. SGI Altix • Large memory, single-system image • 256 Itanium2 processors • 1.5 GHz, 6 GF, 6 MB cache • 1.5 TF • 2 TB shared memory (NUMA) • Targeting biology apps and data analysis
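
As an aside, the peak figures on these two slides follow from simple per-processor arithmetic. The C sketch below reproduces them; the 12.8 GF per-MSP rate for the X1 is an assumption consistent with the quoted 3.2 TF total, while the 6 GF per Itanium2 is stated on the slide (1.5 GHz × 4 flops/cycle).

    /* Back-of-the-envelope check of the peak numbers above. The
       12.8 GF per-MSP rate is an assumption consistent with the
       slide's 3.2 TF total; the 6 GF per Itanium2 is quoted above. */
    #include <stdio.h>

    int main(void)
    {
        printf("X1 peak:    %.2f TF\n", 256 * 12.8 / 1000.0); /* ~3.28, quoted as 3.2 */
        printf("Altix peak: %.2f TF\n", 256 * 6.0  / 1000.0); /* ~1.54, quoted as 1.5 */
        return 0;
    }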

  13. IBM Federation Cluster • 27 p690s, each with 32 1.3-GHz Power4s (864 total processors) • 8 p655s, each with 4 1.7-GHz Power4s, for login and GPFS

Federation vs. Colony:

                                Federation   Colony
    Latency (µs)                        12       19
    Bandwidth (MB/s)                   551      306
    Exchange (1x1) (MB/s)              767      273
    Exchange (32x32) (MB/s)           2199      394
    Bisection (2 nodes) (MB/s)         619      284
    Bisection (32x32) (MB/s)           922      321
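
The latency and bandwidth rows above come from point-to-point message-passing tests. As a rough illustration of how such numbers are measured (a minimal sketch, not the CCS evaluation code; the message size and repetition count are arbitrary choices), an MPI ping-pong benchmark looks like this:

    /* Minimal MPI ping-pong sketch: rank 0 bounces a message off rank 1
       and reports the one-way time and the implied bandwidth. Latency
       runs use a tiny message; bandwidth runs use a large one. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int reps = 1000;
        const int nbytes = 1 << 20;      /* 1 MiB message for bandwidth */
        char *buf = calloc(nbytes, 1);
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double oneway = (MPI_Wtime() - t0) / (2.0 * reps);

        if (rank == 0) {
            printf("one-way time: %.2f us\n", oneway * 1e6);
            printf("bandwidth:    %.1f MB/s\n", nbytes / oneway / 1e6);
        }
        free(buf);
        MPI_Finalize();
        return 0;
    }

The exchange and bisection rows extend the same idea to many simultaneous sender-receiver pairs across the switch.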

  14. Evaluation Plans • Cray X series: upgrade X1 to 512 MSPs, upgrade to 1024 X1E MSPs, Black Widow • Red Storm: 10.5 TF in 2004, 21 TF in 2005 • Blue Gene at Argonne • Cray XD1 (OctigaBay) • SRC FPGA systems • IBM Power5 • SGI Altix (larger images) • ADIC StorNext • Lustre

  15. Questions? James B. White III (Trey) trey@ornl.gov http://www.ccs.ornl.gov/ http://www.csm.ornl.gov/evaluation/
