BLUE GENE/L

Presentation Transcript


  1. BLUE GENE/L Sapnah Aligeti CMPS 5433

  2. Outline • History about supercomputers • Manufacturers / Partners of Blue Gene/L • Why was it created? • Who are the customers? • How much does it cost? • Processors / Memory / Scalability • Stepwise Structure • Hardware Architecture • Interconnection Network • Software • Advantages • Applications

  3. A LITTLE ABOUT SUPERCOMPUTERS…… • IBM’s Naval Ordnance Research Calculator. • IBM's Blue Gene/L.

  4. ……A LITTLE ABOUT SUPERCOMPUTERS (CONTD) • IBM's Blue Gene/L: 360,000,000,000,000 floating-point operations per second (360 teraflops) as of March 2005. • IBM's Naval Ordnance Research Calculator: about 15,000 operations per second.

  5. ……A LITTLE ABOUT SUPERCOMPUTERS (CONTD)

  6. MANUFACTURER / PARTNERS • 1999 - $100M PROJECT BY IBM FOR THE US DEPT OF ENERGY (DOE) - BLUE GENE/L - BLUE GENE/C (CYCLOPS) - BLUE GENE/P (PETAFLOPS) • 2001 - PARTNERSHIP WITH LAWRENCE LIVERMORE NATIONAL LABORATORY (FIRST CUSTOMER)

  7. TWO MAIN GOALS OF BLUE GENE/L • to build a new family of supercomputers optimized for bandwidth, scalability and the ability to handle large amounts of data while consuming a fraction of the power and floor space required by today's fastest systems. • to analyze scientific and biological problems (protein folding).

  8. CUSTOMERS • 64-rack machine to Lawrence Livermore National Laboratory, California • 23 Feb 2004 – 6-rack machine to ASTRON, a leading astronomy organization in the Netherlands, which will use IBM's Blue Gene/L technology as the basis for a new type of radio telescope capable of looking back billions of years in time. • May/June 2004 – 1-rack system to Argonne National Laboratory, Illinois • Sept 2004 – 4-rack Blue Gene/L supercomputer to Japan's National Institute of Advanced Industrial Science and Technology (AIST) to investigate the shapes of proteins. • 6 Jun 2005 – 4-rack machine to the École Polytechnique Fédérale de Lausanne (EPFL) in Lausanne, Switzerland, to simulate the workings of the human brain.

  9. COST • The initial cost was $1.5M per rack • The current cost is $2M per rack • March 2005 – IBM started renting the machine for about $10,000 per week for the use of one-eighth of a Blue Gene/L rack.

  10. PROCESSORS / MEMORY / SCALABILITY PROCESSOR • 65,536 DUAL-PROCESSOR NODES. • 700 MHZ POWERPC 440 PROCESSORS. MEMORY • 512 MB of dynamic random access memory (DRAM) per node. SCALABILITY • BLUE GENE/L IS JUST THE FIRST STEP………

  11. THE BLUE GENE/L – THE STEPWISE STRUCTURE……

  12. THE BLUE GENE/L

  13. THE BLUE GENE/L – THE RACK/CABINET

  14. THE BLUE GENE/L – THE NODE CARD

  15. THE BLUE GENE/L – THE COMPUTE CARD

  16. THE BLUE GENE/L – THE CHIP

  17. THE BLUE GENE/L

  18. THE BLUE GENE/L

  19. THE BLUE GENE/L

  20. BLUE GENE/L I/O ARCHITECTURE • Architecture (figure) • Top view (figure)

  21. HARDWARE • 65,536 compute nodes • Each node is built around an ASIC (Application-Specific Integrated Circuit) • The ASIC includes two 32-bit PowerPC 440 processing cores, each with two 64-bit FPUs (Floating-Point Units) • Compute nodes strictly handle computations • 1,024 I/O nodes • Each I/O node manages communications for a group of 64 compute nodes • 5 network connections
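
To make these numbers concrete, here is a small back-of-the-envelope sketch in C. It assumes (this is not stated on the slides) that each PowerPC 440 core can complete 4 floating-point operations per cycle through its two FPUs, and it reuses the 700 MHz clock and 512 MB-per-node figures from slide 10; under those assumptions it reproduces the roughly 360 TFLOPS peak and the 1,024 I/O nodes quoted elsewhere in the deck.

```c
/* Back-of-the-envelope Blue Gene/L system totals.
 * Assumption (not from the slides): each PowerPC 440 core retires
 * 4 floating-point operations per cycle through its two FPUs. */
#include <stdio.h>

int main(void) {
    const double clock_hz        = 700e6;  /* 700 MHz PowerPC 440 (slide 10)   */
    const int    cores_per_node  = 2;      /* dual-processor node              */
    const int    flops_per_cycle = 4;      /* assumed: 2 FPUs x multiply-add   */
    const long   compute_nodes   = 65536;
    const long   io_nodes        = compute_nodes / 64; /* 1 I/O node per 64    */
    const double mem_per_node_gb = 0.5;    /* 512 MB DRAM per node             */

    double peak_tflops = clock_hz * cores_per_node * flops_per_cycle
                         * compute_nodes / 1e12;
    double total_mem_tib = mem_per_node_gb * compute_nodes / 1024.0;

    printf("I/O nodes:        %ld\n", io_nodes);            /* 1024          */
    printf("Peak performance: %.0f TFLOPS\n", peak_tflops); /* ~367 TFLOPS   */
    printf("Total memory:     %.0f TiB\n", total_mem_tib);  /* 32 TiB        */
    return 0;
}
```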

  22. Interconnection Network • 3D Torus • Global tree • Global interrupts • Ethernet • Control

  23. 3D TORUS n/w FOR 64 NODES (4 * 4 * 4) • http://hpc.csie.thu.edu.tw/docs/Tutorial.pdf

  24. Torus n/w (contd) • Primary connection • The torus network connects all 65,536 compute nodes (32 * 32 * 64). • Each node connects to 6 other nodes (see the sketch below). • Chosen because it provides high-bandwidth nearest-neighbor connectivity. • A single node consists of a single ASIC and memory. • Dynamic adaptive routing.
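
To illustrate the wraparound that distinguishes a torus from a plain 3D mesh, here is a short illustrative C sketch (hypothetical coordinates and node numbering, not IBM's actual routing code) that lists the six nearest neighbors of a node in a 32 * 32 * 64 torus:

```c
/* Illustrative only: the six nearest neighbors of a node in a 3D torus
 * with wraparound, using the 32 x 32 x 64 shape from the slide.
 * The coordinate-to-id mapping is a made-up example, not IBM's scheme. */
#include <stdio.h>

#define DX 32
#define DY 32
#define DZ 64

/* Map (x, y, z) coordinates to a linear node id. */
static int node_id(int x, int y, int z) {
    return (z * DY + y) * DX + x;
}

int main(void) {
    int coord[3] = { 0, 31, 63 };        /* an example corner node */
    int dims[3]  = { DX, DY, DZ };

    printf("Neighbors of node %d:\n", node_id(coord[0], coord[1], coord[2]));
    for (int d = 0; d < 3; d++) {        /* each axis: one neighbor each way */
        for (int step = -1; step <= 1; step += 2) {
            int c[3] = { coord[0], coord[1], coord[2] };
            /* the modular wraparound is what makes it a torus, not a mesh */
            c[d] = (c[d] + step + dims[d]) % dims[d];
            printf("  (%d, %d, %d) -> id %d\n",
                   c[0], c[1], c[2], node_id(c[0], c[1], c[2]));
        }
    }
    return 0;
}
```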

  25. SOFTWARE • The main parallel programming model for BG/L is message passing using MPI (message passing interface) in C, C++, or FORTRAN. • Supports global address space programming models such as Co-Array FORTRAN (CAF) and Unified Parallel C (UPC). • The I/O and external front-end nodes run Linux, and the compute nodes run a kernel that is inspired by Linux.
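
As a minimal sketch of that message-passing model, the following is a plain MPI program in C. It uses only generic MPI calls (nothing Blue Gene/L-specific) and passes a value around a ring of ranks:

```c
/* Minimal MPI example of the message-passing model described above.
 * Generic MPI calls only; nothing here is specific to Blue Gene/L. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank sends its id to the next rank and receives from the
     * previous one, wrapping around like a 1D ring. */
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    int sendbuf = rank, recvbuf = -1;

    MPI_Sendrecv(&sendbuf, 1, MPI_INT, right, 0,
                 &recvbuf, 1, MPI_INT, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d of %d received %d from rank %d\n",
           rank, size, recvbuf, left);

    MPI_Finalize();
    return 0;
}
```

The same source compiles and runs under any MPI implementation; on BG/L it would simply be launched across the compute nodes by the system's job launcher.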

  26. Advantages • Scalable • Less floor space (about half a tennis court) • Avoids the heat problems most supercomputers face • Speed

  27. Limitations • Memory limitation (512 MB per node) • Simple node kernel (does not support fork() or threads)

  28. Applications • BLUE BRAIN PROJECT (announced 6 June 2005): IBM and the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, to study and model the behavior of the brain. • PROTEIN FOLDING: e.g., research into Alzheimer's disease.

  29. Future developments???? • Article published in “THE STANDARD”, China's business newspaper, dated May 29 • The military hopes such a development will allow pilots to control jets using their minds • It could also allow wheelchair users to walk

  30. References • IBM Journal of Research and Development, volume 49, November 2005. • Google News. • http://www.linuxworld.com/read/48131.htm • http://sc-2002.org/paperpdfs/pap.pap207.pdf • http://www.ipab.org/Presentation/sem04/04-02-2.pdf • http://www.desy.de/dvsem/WS0405/steinmacherBurow-20050221.pdf • www.scd.ucar.edu/info/UserForum/presentations/loft.ppt

  31. THANK YOU!! QUESTIONS???

  32. ASIC

  33. GENERAL CONNECTION

  34. What is a kernel?? • In computer science, the kernel is the fundamental part of an operating system. It is the piece of software responsible for providing various computer programs with secure access to the machine's hardware. Since there are many programs and access to the hardware is limited, the kernel also decides when, and for how long, a program may use a piece of hardware; this is called multiplexing. Accessing the hardware directly can be very complex, so kernels usually implement hardware abstractions that hide this complexity and provide a clean, uniform interface to the underlying hardware, which helps application programmers.
