
HPC Benchmarking and Performance Evaluation With Realistic Applications
Brian Armstrong, Hansang Bae, Rudolf Eigenmann, Faisal Saied, Mohamed Sayeed, Yili Zheng
Purdue University


Presentation Transcript


  1. HPC Benchmarking and Performance Evaluation With Realistic Applications
Brian Armstrong, Hansang Bae, Rudolf Eigenmann, Faisal Saied, Mohamed Sayeed, Yili Zheng, Purdue University
Benchmarking has two important goals:
1. Assess the performance of high-performance computer platforms. This is important for machine procurements and for understanding where HPC technology is heading.
2. Measure and show opportunities for progress in HPC. This is important to quantify and compare scientific research contributions and to set new directions for research.

  2. Why Talk About Benchmarking? There is no progress if you can't measure it.
12 ways to fool the scientist (with computer performance evaluation):
• use benchmark applications unknown to others; give no reference
• use applications that have the same name as known benchmarks, but that show better performance of your innovation
• use only those benchmarks out of a suite that show good performance on your novel technique
• use only the benchmarks out of the suite that don't break your technique
• modify the benchmark source code
• change data set parameters
• use the "debug" data set
• use a few loops out of the full programs only
• measure loop performance but label it as the full application
• don't mention in your paper why you have chosen the benchmarks in this way and what changes you have made
• time the interesting part of the program only; exclude overheads (see the timing sketch below)
• measure interesting overheads only; exclude large unwanted items
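The last two items concern what gets timed. As a concrete illustration (not part of the original slide), the sketch below contrasts timing only the compute kernel with timing the run end to end; the three phase functions are hypothetical stand-ins for an application's input, compute, and output phases.

```c
/* Minimal sketch: end-to-end timing vs. timing only the "interesting" kernel.
 * The three phase functions are hypothetical stand-ins for a real application. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

static void read_input(void)    { usleep(200000); }  /* stand-in for input I/O   */
static void solver_kernel(void) { usleep(100000); }  /* stand-in for the compute */
static void write_output(void)  { usleep(300000); }  /* stand-in for output I/O  */

int main(void) {
    double t0 = now();
    read_input();
    double t1 = now();
    solver_kernel();
    double t2 = now();
    write_output();
    double t3 = now();

    /* Reporting only (t2 - t1) as "the application time" is one of the 12 ways
     * to fool the scientist; a full-application benchmark reports (t3 - t0),
     * overheads included. */
    printf("kernel only: %.3f s,  end to end: %.3f s\n", t2 - t1, t3 - t0);
    return 0;
}
```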

  3. Benchmarks Need to be Representative and Open
• Representative benchmarks: represent real problems
• Open benchmarks: no proprietary strings attached; source code and performance data can be freely distributed
With these goals in mind, SPEC's High-Performance Group was formed in 1994.

  4. Why is Benchmarking with Real Applications Hard?
• Simple benchmarks are overly easy to run
• Realistic benchmarks cannot be abstracted from real applications
• Today's realistic applications may not be tomorrow's applications
• Benchmarking is not eligible for research funding
• Maintaining benchmarking efforts is costly
• Proprietary full-application benchmarks cannot serve as yardsticks

  5. SPEC HPC2002
• Includes three (suites of) codes:
  • SPECchem, used in chemical and pharmaceutical industries (gamess): 110,000 lines of Fortran and C
  • SPECenv, a weather forecast application (WRF): 160,000 lines of Fortran and C
  • SPECseis, used in the search for oil and gas: 20,000 lines of Fortran and C
• All codes include several data sets and are available in a serial and a parallel variant (MPI, OpenMP, and hybrid execution are possible; see the sketch below).
• SPEC HPC is used in the TAP list (Top Application Performers), the rank list of HPC systems based on realistic applications: www.purdue.edu/TAPlist
Emphasis on the most realistic applications; no programming model favored.
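To make the "hybrid execution" option concrete, here is a minimal MPI + OpenMP sketch in C; it is an illustrative toy, not code from any of the SPEC HPC2002 suites. Each MPI rank computes a partial sum with an OpenMP-parallel loop, and the ranks combine their results with MPI_Reduce.

```c
/* Minimal hybrid MPI + OpenMP sketch (illustrative toy, not SPEC code). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nranks;
    /* Request thread support so OpenMP threads can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local = 0.0;
    /* Each MPI rank handles a cyclic slice of the work with an OpenMP loop. */
    #pragma omp parallel for reduction(+:local)
    for (int i = rank; i < 1000000; i += nranks)
        local += 1.0 / (1.0 + (double)i);

    double global;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %f (ranks = %d, threads per rank = %d)\n",
               global, nranks, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```

Such a code is typically built with an MPI compiler wrapper plus the OpenMP flag (for example, mpicc -fopenmp) and launched with a few MPI ranks per node and OMP_NUM_THREADS OpenMP threads per rank.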

  6. Ranklist of Supercomputers based on Realistic Applications (SPEC HPC, medium data set)

  7. Can We Learn the Same from Kernel Benchmarks?

  8. MPI Communication (percent of overall runtime)
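One simple way to obtain a figure like "MPI time as a percentage of overall runtime" is to wrap the communication calls with MPI_Wtime(). The sketch below only illustrates the measurement idea; the 100-step loop and the MPI_Allreduce are hypothetical stand-ins for an application's time stepping and communication, and data behind a slide like this would typically come from an MPI profiling tool.

```c
/* Minimal sketch of measuring MPI time as a fraction of total runtime. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t_start = MPI_Wtime();
    double comm_time = 0.0;

    for (int step = 0; step < 100; ++step) {
        /* ... per-timestep computation would go here ... */

        double buf = (double)rank, sum;
        double t0 = MPI_Wtime();
        MPI_Allreduce(&buf, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        comm_time += MPI_Wtime() - t0;   /* accumulate time spent inside MPI */
    }

    double total = MPI_Wtime() - t_start;
    if (rank == 0)
        printf("MPI communication: %.1f%% of runtime\n",
               100.0 * comm_time / total);

    MPI_Finalize();
    return 0;
}
```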

  9. I/O Behavior: Disk Read/Write Times and Volumes
Only SPECseis has parallel I/O. SPECenv and SPECchem perform I/O on a single processor. HPL has no I/O.
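To make the distinction concrete, the sketch below shows both I/O patterns in one toy MPI program: a single-writer phase in which rank 0 gathers everything and writes alone (the pattern attributed above to SPECenv and SPECchem), and a parallel phase in which every rank writes its own block of a shared file through MPI-IO (as in SPECseis). The file names, sizes, and data are placeholders, not anything taken from the benchmark suites.

```c
/* Minimal sketch contrasting single-process I/O with parallel (MPI-IO) I/O. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024   /* doubles written per rank (placeholder size) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double *block = malloc(N * sizeof(double));
    for (int i = 0; i < N; ++i) block[i] = rank + i * 1e-6;

    /* Pattern 1: single-process I/O -- rank 0 gathers all data and writes alone. */
    double *all = (rank == 0) ? malloc((size_t)nranks * N * sizeof(double)) : NULL;
    MPI_Gather(block, N, MPI_DOUBLE, all, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        FILE *f = fopen("serial_io.dat", "wb");
        fwrite(all, sizeof(double), (size_t)nranks * N, f);
        fclose(f);
        free(all);
    }

    /* Pattern 2: parallel I/O -- every rank writes its own block of a shared file. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "parallel_io.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at_all(fh, offset, block, N, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(block);
    MPI_Finalize();
    return 0;
}
```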

  10. I/O Volume, Time, and Effective Bandwidth
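The effective bandwidth shown here is presumably the simple ratio of the two measured quantities, effective bandwidth = I/O volume / I/O time; for example, 8 GB read and written in 40 seconds corresponds to an effective bandwidth of about 200 MB/s. (These numeric values are illustrative only, not data from the slide.)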

  11. Memory Footprints
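As an aside, one lightweight way to record a per-process memory footprint is to read the peak resident set size with getrusage() at the end of a run. The sketch below is illustrative only and is not how the SPEC suites report their footprints; the roughly 256 MB allocation is a hypothetical workload so the number is visible.

```c
/* Minimal sketch: report the peak resident set size of this process.
 * On Linux, ru_maxrss is reported in kilobytes. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
    /* Hypothetical workload: touch ~256 MB so the footprint is visible. */
    size_t n = 32 * 1024 * 1024;              /* 32M doubles = 256 MB */
    double *a = malloc(n * sizeof(double));
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i) { a[i] = (double)i; sum += a[i]; }

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("checksum %.0f, peak resident set size: %.1f MB\n",
           sum, ru.ru_maxrss / 1024.0);

    free(a);
    return 0;
}
```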

  12. Conclusions
• There is a dire need for basing performance evaluation and benchmarking results on realistic applications.
• The SPEC HPC suite meets the main criteria for real-application benchmarking: relevance and openness.
• Kernel benchmarks are the best choices for measuring individual system components. However, there is a large range of questions that can only be answered satisfactorily using real-application benchmarks.
• Benchmarking with real applications is hard and there are many challenges, but there is no replacement.
