
Challenges to High Productivity Computing Systems and Networks

The 3rd International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN 2011), www.iasks.org/conferences/EUSPN2011, Amman, Jordan, October 10-13, 2011. Mohammad Malkawi, Dean of Engineering, Jadara University.




Presentation Transcript


  1. The 3rd International Conference on Emerging Ubiquitous Systems and Pervasive Networks, www.iasks.org/conferences/EUSPN2011, Amman, Jordan, October 10-13, 2011

  2. Challenges to High Productivity Computing Systems and Networks • Mohammad Malkawi, Dean of Engineering, Jadara University • mmalkawi@aimws.com

  3. Outline • High Productivity Computing Systems (HPCS) - The Big Picture • The Challenges • IBM PERCS • Cray Cascade • SUN Hero Program • Cloud Computing

  4. HPCS: The Big Picture • Manufacture and deliver a peta-flop class computer • Complex architecture • High performance • Easier to program • Easier to use

  5. HPCS Goals • Productivity • Reduce code development time • Processing power • Floating point & integer arithmetic • Memory • Large size, high bandwidth & low latency • Interconnection • Large bisection bandwidth

  6. HPCS Challenges • High Effective Bandwidth • High bandwidth/low latency memory systems • Balanced System Architecture • Processors, memory, interconnects, programming environments • Robustness • Hardware and software reliability • Compute through failure • Intrusion identification and resistance techniques.

  7. HPCS Challenges • Performance Measurement and Prediction • New class of metrics and benchmarks to measure and predict performance of system architecture and applications software • Scalability • Adapt and optimize to changing workload and user requirements; e.g., multiple programming models, selectable machine abstractions, and configurable software/hardware architectures

  8. Productivity Challenges • Quantify productivity for code development and production • Identify characteristics of • Application codes • Workflow • Bottlenecks and obstacles • Lessons learned so that decisions by the productivity team and the vendors are based on real data rather than anecdotal data

  9. Did Not Learn the Lessons • Figure 2: Defect Arrival Rate for R8, R9 and R10 [figure not included in the transcript]

  10. Productivity Dilemma - 1 • Diminishing productivity is alarming • Coding • Debugging • Optimizing • Modifying • Over-provisioning hardware • Running high-end applications

  11. Productivity Dilemma - 2 • Not long ago, a computational scientist could personally write, debug and optimize code to run on a leadership-class high performance computing system without the help of others. • Today, programming a cluster of machines is significantly more difficult than traditional programming, and the scale of the machines and problems has increased more than 1,000 times.

  12. Productivity Dilemma - 3 • Owning and running high-end computational facilities for nuclear research, seismic modeling, gene sequencing or business intelligence takes a sizeable investment in staffing, procurement and operations. • Applications achieve 5 to 10 percent of the theoretical peak performance of the system. • Applications must be restarted from scratch every time a hardware or software failure interrupts the job.

  13. HPCS Trends: Productivity Crisis

  14. High Productivity Computing: Scaling the Program Without Scaling the Programmer • Bandwidth enables productivity and allows for simpler programming environments and systems with greater fault tolerance

  15. Language Challenges • MPI is a fairly low-level programming interface • Reliable, predictable, and it works • Provided as library bindings for Fortran, C and C++ • New languages offer a higher level of abstraction • Improve legacy applications • Scale to petascale levels • SUN – Fortress • IBM – X10 • Cray – Chapel • OpenMP
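As a concrete illustration of how low-level MPI programming is, the sketch below (an assumed example, not taken from the presentation) passes a single token around a ring; every data movement must be spelled out as an explicit library call:

```c
/* Minimal MPI ring exchange in C: every transfer is an explicit,
 * hand-coded library call -- the "low-level" style the slide refers to.
 * Assumed illustration, not code from the presentation. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;

    if (rank == 0) {
        token = 42;                       /* originate the token */
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
    }
    printf("rank %d saw token %d\n", rank, token);
    MPI_Finalize();
    return 0;
}
```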

  16. Global View Programming Model • Global View programs present a single, global view of the program's data structures. • Begin with a single main thread. • Parallel execution then spreads out dynamically as work becomes available.
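A rough shared-memory analogue of this model, assumed here for illustration, can be written with OpenMP tasks in C: one main thread sees the whole data structure, and parallel work spreads dynamically as tasks are created (the HPCS PGAS languages extend the same idea across distributed memory):

```c
/* Rough shared-memory analogue of the global-view model using OpenMP tasks:
 * one main thread owns a single, global view of the array, and parallel
 * execution spreads dynamically as tasks become available.
 * Assumed sketch, not code from the presentation. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

static double data[N];          /* one global data structure */

int main(void)
{
    #pragma omp parallel        /* a team of threads exists ...            */
    #pragma omp single          /* ... but execution begins on one thread  */
    {
        for (int block = 0; block < N; block += 100000) {
            #pragma omp task firstprivate(block)   /* work spreads dynamically */
            for (int i = block; i < block + 100000; i++)
                data[i] = i * 0.5;
        }
        #pragma omp taskwait    /* wait for all outstanding tasks */
    }
    printf("data[N-1] = %f\n", data[N - 1]);
    return 0;
}
```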

  17. Unprecedented Performance Leap • Performance targets require aggressive improvements in system parameters traditionally ignored by the "Linpack" benchmark • Improve system performance under the most demanding benchmarks (GUPS, giga-updates per second) • Determine whether general applications will be written or modified to benefit from these features.
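GUPS measures the rate of read-modify-write updates to random memory locations, as in the HPC Challenge RandomAccess benchmark; the loop below is a simplified, single-node sketch of that access pattern (assumed for illustration, not the official benchmark code):

```c
/* Simplified single-node sketch of the GUPS (RandomAccess) access pattern:
 * read-modify-write updates to pseudo-random table locations -- the kind of
 * workload that stresses memory and network latency rather than flops.
 * Not the official HPC Challenge benchmark code. */
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE (1u << 22)            /* 4M entries, power of two */

static uint64_t table[TABLE_SIZE];

int main(void)
{
    uint64_t ran = 1;
    uint64_t updates = 4 * (uint64_t)TABLE_SIZE;

    for (uint64_t i = 0; i < updates; i++) {
        /* simple pseudo-random stream (stand-in for the benchmark's generator) */
        ran = ran * 6364136223846793005ULL + 1442695040888963407ULL;
        table[ran & (TABLE_SIZE - 1)] ^= ran;   /* random read-modify-write */
    }
    printf("table[0] = %llu\n", (unsigned long long)table[0]);
    return 0;
}
```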

  18. Trade-Offs • Portability versus innovations • Abstractions vs. difficulty of programming and performance overhead • Shared memory versus message passing

  19. Cost of Petascale Computing • Require petabytes of memory • On the order of 10^6 processors • Hundreds of petabytes of disk storage for capacity and bandwidth • Power consumption and cost for DRAM and disks (tens of megawatts) • Operational cost

  20. The DARPA HPCS Program • First major program to devote effort to make high end computers more user-friendly • Mask the difficulty of developing and running codes on HPCS • Mask the challenge of getting good performance for a general code • Fast, large, and low latency RAM • Fast processing • Quantitative measure of productivity

  21. IBM HPCS EXAMPLE

  22. IBM HPCS Program – PERCS 2011 • Productive, Easy-to-use, Reliable Computing System • Rich programming environment • Develop new applications and maintain existing ones. • Support existing programming models and languages • Scalability to the peta-level • Automate performance tuning tasks • Rich graphical interfaces • Automate monitoring and recovery tasks • Fewer system administrators to handle larger systems more effectively

  23. IBM Blue Gene – HPCS Base

  24. IBM Approach - Hardware • Innovative processor chip design & leverage the POWER processor server line. • Lower Soft Error Rates (SER) • Reduce the latency of memory accesses by placing the processors close to large memory arrays. • Multiple chip configuration to suit different workloads.

  25. IBM Approach - Software • Large set of tools integrated into a modern, user-friendly programming environment. • Support both legacy programming models and languages (MPI, OpenMP, C, C++, Fortran, etc.), • Support emerging ones (PGAS) • Design new experimental programming language, called X10.

  26. X10 Features • Designed for parallel processing from the ground up. • Falls under the Partitioned Global Address Space (PGAS) category • Balance between a high-level abstraction and exposing the topology of the system • Asynchronous interactions among the parallel threads • Avoid the blocking synchronization style
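No X10 code appears in the transcript; as a rough analogue in C with MPI, the assumed sketch below starts communication asynchronously, overlaps it with local work, and blocks only when the data is actually needed:

```c
/* Rough C/MPI analogue of the asynchronous, non-blocking interaction style
 * the slide describes: start a transfer, overlap it with local work, and
 * wait only when the result is needed. Assumed example, not X10 code.
 * Assumes an even number of ranks so every rank has a partner. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, peer_value = 0, my_value;
    MPI_Request reqs[2];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    my_value = rank + 100;
    int partner = rank ^ 1;                       /* pair ranks 0-1, 2-3, ... */

    MPI_Irecv(&peer_value, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&my_value, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1]);

    double local = 0.0;                           /* useful work overlaps the transfer */
    for (int i = 0; i < 1000000; i++)
        local += i * 1e-6;

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);    /* block only when the data is needed */
    printf("rank %d: peer sent %d, local work %f\n", rank, peer_value, local);
    MPI_Finalize();
    return 0;
}
```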

  27. CRAY HPCS EXAMPLE

  28. Multiple Processing Technologies • In high performance computing: one size does not fit all • Heterogeneous computing using custom processing technologies. • Performance achieved via deeper pipelining and more complex microarchitectures • Introduction of multi-core processors: • Further stresses processor-memory balance issues • Drives up the number of processors required to solve large problems

  29. Specialized Computing Technologies • Vector processing and field programmable gate arrays (FPGAs) • Ability to extract more performance out of the transistors on a chip with less control overhead. • Allow higher processor performance, with lower power • Reduce the number of processors required to solve a given problem • Vector processors tolerate memory latency extremely well
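Portable C cannot target Cray's vector hardware directly, but the kind of loop vector units thrive on can at least be marked for SIMD execution; the DAXPY-style sketch below is an assumed illustration only:

```c
/* A vectorizable DAXPY-style loop with an OpenMP SIMD hint: regular,
 * independent iterations of the sort vector processors execute with little
 * control overhead and good latency tolerance.
 * Portable assumed sketch; it does not target Cray vector hardware specifically. */
#include <stdio.h>

#define N 1048576

static double x[N], y[N];

int main(void)
{
    const double a = 2.5;

    for (int i = 0; i < N; i++) {        /* initialise operands */
        x[i] = i * 0.001;
        y[i] = 1.0;
    }

    #pragma omp simd                     /* iterations are independent: vectorize */
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[N-1] = %f\n", y[N - 1]);
    return 0;
}
```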

  30. Specialized Computing Technologies • Multithreading improves latency tolerance • Cascade design will combine multiple computing technologies • Pure scalar nodes, based on Opteron microprocessors • Nodes providing vector, massively multithreaded, and FPGA-based acceleration. • Nodes that can adapt their mode of operation to the application.

  31. Cray: The Cascade Approach • Scalable, high-bandwidth system • Globally addressable memory • Heterogeneous processing technologies • Fast serial execution • Massive multithreading • Vector processing and FPGA-based application acceleration. • Adaptive supercomputing: • The system adapts to the application rather than requiring the programmer to adapt the application to the system.

  32. Cascade Approach • Use the Cray T3E massively parallel system • Use a best-of-class microprocessor • Processors directly access global memory with very low overhead and at very high data rates. • Hierarchical address translation allows the processors to access very large data sets without suffering from TLB faults • AMD's Opteron will be the base processor for Cascade

  33. Cray – Adaptive Supercomputing • The system adapts to the application • The user logs into a single system, and sees one global file system. • The compiler analyzes the code to determine which processing technology best fits the code • The scheduling software automatically deploys the code on the appropriate nodes.

  34. Balanced Hardware Design • A balanced hardware design • Complements processor flops with memory, network and I/O bandwidth • Scalable performance • Improving programmability and breadth of applicability. • Balanced systems also require fewer processors to scale to a given level of performance, reducing failure rates and administrative overhead.

  35. Cray- System Bandwidth Challenge • The Cascade program is attacking this problem on two fronts: • Signalling technology • Network design • Provide truly massive global bandwidth at an affordable cost. • A key part of the design is a common, globally addressable memory across the whole machine. • Efficient, low-overhead communication.

  36. Cray- System Bandwidth Challenge • Accessing remote data is as simple as issuing a load or store instruction, rather than calling a library function to pass messages between processors. • Allows many outstanding references to be overlapped with each other and with ongoing computation.

  37. Cray Programming Model • Support MPI for legacy purposes • Unified Parallel C (UPC) and Coarray Fortran (CAF) • simpler and easier to write than MPI • Reference memory on remote nodes as easily as referencing memory on the local node • Data sharing is much more natural • Communication overhead is much lower.
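UPC and CAF code is not reproduced here; the closest illustration in plain C is MPI-3 one-sided communication, where one process reads another's exposed memory with a get instead of a matched send/receive pair. The sketch below is an assumed analogue, not UPC or CAF itself:

```c
/* Assumed sketch of the "reference remote memory directly" idea using MPI-3
 * one-sided communication: rank 0 reads a value exposed by rank 1 with
 * MPI_Get, with no matching send/receive posted on the other side.
 * UPC and Coarray Fortran make this even more direct; this is only an analogue.
 * Assumes at least two ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, local, remote = -1;
    MPI_Win win;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = 1000 + rank;                       /* each rank exposes one integer */
    MPI_Win_create(&local, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0)                             /* rank 0 reads rank 1's value directly */
        MPI_Get(&remote, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);

    if (rank == 0)
        printf("rank 0 read %d from rank 1\n", remote);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```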

  38. The Chapel – Cray HPCS Language • Support for graphs, hash tables, sparse arrays, and iterators. • Ability to separate the specification of an algorithm from structural details of the computation including • Data layouts • Work decomposition and communication. • Simplifies the creation of the basic algorithms • Allows these structural components to be gradually tuned over time.

  39. Cray's Programming Tools • Reduce the complexity of working on highly scalable applications. • The Cascade debugger solution will • Focus on data rather than control • Support application porting • Allow scaling commensurate with the application • Integrated development environment (IDE)

  40. Cascade Performance Analysis Tools • Hardware performance counters • Software introspection techniques. • Present the user with insight, rather than statistics. • Act as a parallel programming expert • Provide high-level feedback on program behaviour • Provide suggestions for program modifications to remove key bottlenecks or otherwise improve performance.
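Cray's Cascade tools themselves are not shown in the transcript; as an assumed, Linux-specific illustration of what reading a hardware performance counter looks like, the C sketch below counts retired instructions around a code region with perf_event_open:

```c
/* Assumed Linux-specific illustration of hardware performance counters:
 * count retired instructions around a code region with perf_event_open.
 * Not Cray's Cascade tooling. */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    long long count = 0;

    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_INSTRUCTIONS;   /* retired instructions */
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;

    int fd = (int)perf_event_open(&attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    volatile double x = 0.0;
    for (int i = 0; i < 1000000; i++)           /* the code region being measured */
        x += i * 0.5;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    read(fd, &count, sizeof(count));
    printf("instructions retired: %lld (x=%f)\n", count, x);
    close(fd);
    return 0;
}
```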

  41. SUN HPCS EXAMPLE

  42. Evolution of HPCS at SUN • Grid: • Loosely coupled heterogeneous resources • Multiple administrative domains • Wide area network • Clusters • Tightly coupled high performance systems • Message passing – MPI • Ultrascale • Distributed scalable systems • High productivity shared memory systems • High bandwidth, global address space, unified administration tools

  43. SUN Approach – The Hero System • Rich bandwidth • Low latencies • Very high levels of fault tolerance • Highly integrated toolset to scale the program and not the programmers • Multithreading technologies ( > 100 concurrent threads)

  44. SUN Approach – The Hero System • Globally addressable memory • System level and application checkpointing • Hardware and software telemetry for dramatically improved fault tolerance. • The system appears more like a flat memory system • Focus on solving the problem at hand rather than making elaborate efforts to distribute data in a robust manner.
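Application-level checkpointing, mentioned above and the usual remedy for the restart-from-scratch problem noted earlier, can be as simple as periodically writing solver state to disk and reloading it on restart; a minimal assumed C sketch:

```c
/* Minimal application-level checkpoint/restart sketch: periodically write the
 * iteration number and solution array to disk, and resume from the last
 * checkpoint after a failure. Assumed illustration, not SUN's actual mechanism. */
#include <stdio.h>

#define N 1024
#define CKPT_FILE "state.ckpt"
#define CKPT_INTERVAL 100

static double state[N];

static void save_checkpoint(int next_step)
{
    FILE *f = fopen(CKPT_FILE, "wb");
    if (!f) return;
    fwrite(&next_step, sizeof(next_step), 1, f);
    fwrite(state, sizeof(double), N, f);
    fclose(f);
}

static int load_checkpoint(void)             /* returns the step to resume from */
{
    int step = 0;
    FILE *f = fopen(CKPT_FILE, "rb");
    if (!f) return 0;                         /* no checkpoint: start from scratch */
    if (fread(&step, sizeof(step), 1, f) != 1) step = 0;
    fread(state, sizeof(double), N, f);
    fclose(f);
    return step;
}

int main(void)
{
    int start = load_checkpoint();
    for (int step = start; step < 10000; step++) {
        for (int i = 0; i < N; i++)           /* stand-in for the real computation */
            state[i] += 1e-3;
        if (step % CKPT_INTERVAL == 0)
            save_checkpoint(step + 1);        /* resume after the completed step */
    }
    printf("resumed at step %d, state[0]=%f\n", start, state[0]);
    return 0;
}
```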

  45. Definition: Bisection Bandwidth • A standard metric for a system's ability to globally move data • Split a system into equal halves such that the number of connections across the split is minimized; the bandwidth across that split is the bisection bandwidth • Example: an all-to-all interconnect between 8 cabinets has 28 total connections, of which 16 cross the bisection and 12 do not • High-bandwidth optical connections are key to meeting the HPCS peta-scale bisection bandwidth target
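The slide's 8-cabinet numbers follow from simple counting: n fully connected cabinets have n(n-1)/2 links, and a split into two halves of n/2 cuts (n/2)^2 of them. A small assumed C check reproduces the 28/16/12 figures:

```c
/* Check of the slide's bisection example: 8 fully connected cabinets have
 * 8*7/2 = 28 links; splitting them into two halves of 4 cuts 4*4 = 16 links,
 * leaving 2 * (4*3/2) = 12 links inside the halves.
 * Assumed helper, not part of the presentation. */
#include <stdio.h>

int main(void)
{
    int n = 8;                               /* cabinets, all-to-all connected    */
    int total    = n * (n - 1) / 2;          /* every pair of cabinets has a link */
    int half     = n / 2;
    int crossing = half * half;              /* links with one end in each half   */
    int within   = total - crossing;         /* links kept inside a half          */

    printf("total=%d crossing=%d within=%d\n", total, crossing, within);
    /* prints: total=28 crossing=16 within=12 */
    return 0;
}
```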

  46. System Bandwidth Over Time • A giant leap in productivity is expected

  47. High Bandwidth Required by HPCS • Radical changes from today's architecture are necessary

  48. Motivation for Higher Bandwidth

  49. Growing BW demand in HPCS • Multicore CPUs: aggregation of multiple cores is unstoppable, and copper interconnects are stressed at very large scale • Silicon photonics is the solution, since it brings the potential of unlimited BW on the best medium, allowing for large aggregations of multicore CPUs

  50. Growing BW demand in HPCS • Clusters are growing in number of nodes and in performance per node • Interconnects are the limiting factor in BW, latency and distance • Protocol overhead adds latency, and copper increases latency • Silicon photonics brings high BW and low latency
