This workshop explores the critical need for standardized benchmarks in heterogeneous computing environments. While benchmarks are essential for evaluating price and performance, challenges arise when license restrictions prevent code comparison. Participants will learn to guide their own parallel runs effectively, recognize the limitations when comparing different systems, and utilize automatic processor selection. We will address specific workloads and propose on-the-fly benchmarking techniques applicable to diverse configurations such as GPU, CPU, and FPGA systems, paving the way for efficient performance assessments.
Parallelization of CC Workshop: Benchmark Suggestion
Sudhakar Pamidighantam, NCSA
General Benchmark Needs
• Benchmark standardization is important, but comparison with or between codes can be problematic if license statements prohibit such activity
• The information should give users guidelines for their own parallel runs
• Benchmarks between heterogeneous systems may not be comparable except by total time to solution
• Benchmarking is used to evaluate systems for price/performance and should be a continuous process
• NSF has a set that could be a start if we want one
Heterogeneous Systems
• GPU/CPU/FPGA counts and their usage
• Cache amounts and bandwidths
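Memory bandwidth is one of the system properties a benchmark suite would need to record. A minimal sketch of a STREAM-style copy measurement, in Python for illustration only (a tuned C kernel would report substantially higher numbers; the function name and defaults are this sketch's assumptions, not part of any proposed suite):

```python
import time

def copy_bandwidth_gbps(size_bytes=64 * 1024 * 1024, repeats=5):
    """Estimate effective memory-copy bandwidth in GB/s using a
    STREAM-like copy kernel. Interpreter overhead makes this a
    lower bound on the hardware's real bandwidth."""
    src = bytearray(size_bytes)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst = bytes(src)  # one full copy of the buffer
        best = min(best, time.perf_counter() - t0)
        del dst
    # A copy reads size_bytes and writes size_bytes: 2x traffic.
    return (2 * size_bytes) / best / 1e9
```

Taking the best of several repeats, as STREAM does, filters out one-off interference from other processes.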
Goals for Benchmarks
• Automatic processor count/type selection
• Problem specificity
• On-the-fly benchmarking for a specific CPU/I/O distribution
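The first and third goals can be combined: time a small trial workload under each candidate configuration and select the fastest before launching the full run. A minimal sketch, assuming a caller-supplied `run(config, work)` callable (the name and calling convention are this sketch's assumptions):

```python
import time

def autotune(run, configs, work):
    """On-the-fly selection: time `run(config, work)` for each
    candidate configuration (e.g. a processor count or a CPU/GPU
    split) and return the one with the lowest time-to-solution,
    along with all measured timings."""
    timings = {}
    for cfg in configs:
        t0 = time.perf_counter()
        run(cfg, work)
        timings[cfg] = time.perf_counter() - t0
    best = min(timings, key=timings.get)
    return best, timings
```

The trial workload should be small enough that the tuning cost is negligible but large enough to exercise the same CPU/I/O distribution as the production run.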
Systems
• NSF set
• Natural systems that are easy to grow systematically, e.g. benzene ... hexacene, polymers, or argon clusters, chosen to define roughly constant work/data per processing unit (method dependent)
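Holding work per processing unit constant while growing the system is a weak-scaling experiment, and its standard figure of merit can be computed directly from the timings. A short sketch (the function name is an assumption for illustration):

```python
def weak_scaling_efficiency(timings):
    """Weak-scaling efficiency E(p) = T(1) / T(p) for runs in which
    the work per processing unit is held constant (e.g. one acene
    ring or one argon atom per core). `timings` maps processor
    count -> wall-clock seconds; E(p) = 1.0 is ideal scaling."""
    t1 = timings[1]
    return {p: t1 / t for p, t in sorted(timings.items())}
```

With perfect weak scaling the runtime stays flat as processors and problem size grow together, so any rise in T(p) shows up directly as efficiency below 1.0.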