This presentation covers HLS, a hybrid performance simulator combining statistical and symbolic execution, developed by Mark Oskin, Fred Chong, and Matthew Farrens at UC Davis. HLS provides fast and accurate simulation, enabling exploration of the performance characteristics of complex designs without the time commitment traditionally required by cycle-by-cycle simulation. Its benefits include near-interactive simulation, the ability to vary low-level parameters freely, and effective performance profiling across design spaces. The results highlight its advantages over traditional simulation methods and suggest directions for future research.
Combining Statistical and Symbolic Simulation • Mark Oskin, Fred Chong, and Matthew Farrens • Dept. of Computer Science, University of California at Davis
Overview • HLS is a hybrid performance simulator • Statistical + Symbolic • Fast • Accurate • Flexible
Motivation (figure): design parameters of interest include I-cache hit rate, basic block size, dispatch bandwidth, I-cache miss penalty, and branch mispredict penalty
Motivation • Fast simulation • seconds instead of hours or days • ideally interactive • Abstract simulation • simulate the performance of unknown designs • application characteristics, not applications
Outline • Simulation technologies and HLS • From applications to profiles • Validation • Examples • Issues • Conclusion
Design Flow with HLS (figure): design issues are explored with HLS to estimate performance and profile possible solutions before committing to cycle-by-cycle simulation
Traditional Simulation Techniques • Cycle-by-cycle (SimpleScalar, SimOS, etc.) + accurate – slow • Native emulation / basic-block models (Atom, Pixie) + fast, handles complex applications – useful only to a point (no low-level modifications)
Statistical / Symbolic Execution • HLS + fast (near interactive) + accurate (– within regions) + permits variation of low-level parameters + arbitrary design points (– use carefully)
HLS: A Superscalar Statistical and Symbolic Simulator (figure): block diagram in which the L1 I-cache, L1 D-cache, L2 cache, main memory, and branch predictor are statistical models, while the fetch unit and the out-of-order dispatch, execution, and completion units are modeled symbolically
Workflow (figure): the application code/binary is profiled with sim-stat and sim-outorder (and on an R10k) to produce an application profile and a machine profile; from these a statistical binary is generated, which HLS executes under a given machine configuration
Machine Configurations • Number of functional units (I, F, [L,S], B) • Functional unit pipeline depths • Fetch, dispatch, and completion bandwidths • Memory access latencies • Mis-speculation penalties
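A minimal sketch of what such a machine configuration record might look like, written in C since HLS builds on the C-based SimpleScalar toolset; the struct and field names are illustrative assumptions, not the actual HLS source:

/* Machine configuration: one record per simulated design point. */
typedef struct {
    int int_units, fp_units, ls_units, branch_units; /* functional unit counts (I, F, [L,S], B) */
    int int_pipe_depth, fp_pipe_depth;               /* functional unit pipeline depths */
    int fetch_width, dispatch_width, complete_width; /* fetch, dispatch, and completion bandwidths */
    int l1_lat, l2_lat, mem_lat;                     /* memory access latencies (cycles) */
    int misspec_penalty;                             /* mis-speculation penalty (cycles) */
} machine_config;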
Profiles • Machine profile: • cache hit rates (%) • branch prediction accuracy (%) • Application profile: • basic block size (mean, std. dev.) • instruction mix (% of I, F, L, S, B) • dynamic instruction distance (histogram)
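The two profiles can be pictured as small records as well; this is a sketch under the same assumptions (field names invented for illustration, instruction mix stored as fractions, histogram size DID_BUCKETS chosen arbitrarily):

/* Machine profile: statistics gathered from a real machine or detailed simulator. */
typedef struct {
    double l1i_hit, l2i_hit, l1d_hit, l2d_hit; /* cache hit rates */
    double bpred_accuracy;                     /* branch prediction accuracy */
} machine_profile;

/* Application profile: statistics gathered from the application binary. */
#define DID_BUCKETS 32                         /* assumed histogram size */
typedef struct {
    double bb_mean, bb_stddev;                 /* basic block size: mean and std. dev. */
    double mix[5];                             /* instruction mix: fractions of I, F, L, S, B */
    double did_hist[DID_BUCKETS];              /* dynamic instruction distance histogram */
} app_profile;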
Statistical Binary • 100 basic blocks • Correlated: • random instruction mix • random assignment of dynamic instruction distance • random distribution of cache and branch behaviors
Statistical Binary (figure): each symbolic instruction records its core functional unit requirement, its cache behavior during I-fetch and during data access, its branch predictor behavior, and its dynamic instruction distances, e.g.:
load (l1 i-cache, l2 i-cache, l1 d-cache, l2 d-cache, dependence 0)
integer (l1 i-cache, l2 i-cache, dependence 0, dependence 1)
branch (l1 i-cache, l2 i-cache, branch-predictor accuracy, dep 0, dep 1)
store (l1 i-cache, l2 i-cache, l1 d-cache, l2 d-cache, dep 0, dep 1)
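To make the generation step concrete, here is a hedged C sketch that produces one statistical basic block from the profile records sketched above; pick_op, flip, and the uniform stand-in for sampling the dynamic-instruction-distance histogram are simplifications, not HLS's actual (correlated) generator:

#include <stdlib.h>

typedef enum { OP_INT, OP_FP, OP_LOAD, OP_STORE, OP_BRANCH } op_class;

/* One symbolic instruction with its statistical annotations. */
typedef struct {
    op_class op;
    int l1i_hit, l2i_hit;   /* cache behavior during I-fetch */
    int l1d_hit, l2d_hit;   /* cache behavior during data access (loads/stores) */
    int bpred_correct;      /* branch predictor behavior (branches) */
    int dep0, dep1;         /* dynamic instruction distances to producers */
} stat_insn;

/* Return 1 with probability p. */
static int flip(double p) { return ((double)rand() / RAND_MAX) < p; }

/* Pick an op class at random according to the instruction mix (fractions summing to 1). */
static op_class pick_op(const double mix[5]) {
    double r = (double)rand() / RAND_MAX, acc = 0.0;
    for (int i = 0; i < 4; i++) { acc += mix[i]; if (r < acc) return (op_class)i; }
    return OP_BRANCH;
}

/* Fill bb[0..n-1] with randomly generated, annotated symbolic instructions. */
void gen_basic_block(stat_insn *bb, int n,
                     const machine_profile *mp, const app_profile *ap) {
    for (int i = 0; i < n; i++) {
        stat_insn *in = &bb[i];
        in->op      = pick_op(ap->mix);
        in->l1i_hit = flip(mp->l1i_hit);
        in->l2i_hit = in->l1i_hit ? 1 : flip(mp->l2i_hit);
        in->l1d_hit = flip(mp->l1d_hit);
        in->l2d_hit = in->l1d_hit ? 1 : flip(mp->l2d_hit);
        in->bpred_correct = flip(mp->bpred_accuracy);
        in->dep0 = rand() % DID_BUCKETS;  /* stand-in for sampling ap->did_hist */
        in->dep1 = rand() % DID_BUCKETS;
    }
}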
HLS Instruction Fetch Stage • Fetches symbolic instructions and interacts with a statistical memory system and branch predictor model. • Similar to conventional instruction fetch: has a PC, has a fetch window, interacts with caches, utilizes a branch predictor, passes instructions to dispatch. • Difference: the caches and branch predictor are statistical models.
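A hedged sketch of the fetch-stage idea, reusing the records from the earlier sketches: fetch up to fetch_width symbolic instructions per cycle, charge stall cycles when the pre-generated annotations say the I-cache missed, and stop at a mispredicted branch. The stall accounting is a deliberate simplification, not the real HLS pipeline model:

/* One fetch cycle: returns the new PC and accumulates stall cycles. */
int fetch_cycle(const stat_insn *prog, int pc, int prog_len,
                const machine_config *cfg,
                stat_insn *fetch_buf, int *stall_cycles) {
    int fetched = 0;
    while (fetched < cfg->fetch_width && pc < prog_len) {
        const stat_insn *in = &prog[pc++];
        if (!in->l1i_hit)                          /* statistical I-cache model */
            *stall_cycles += in->l2i_hit ? cfg->l2_lat : cfg->mem_lat;
        fetch_buf[fetched++] = *in;                /* hand instruction to dispatch */
        if (in->op == OP_BRANCH && !in->bpred_correct) {
            *stall_cycles += cfg->misspec_penalty; /* statistical branch predictor */
            break;                                 /* misprediction ends this fetch group */
        }
    }
    return pc;
}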
HLS Multi-value Validation with SimpleScalar (figure): HLS vs. SimpleScalar, Perl benchmark
HLS Multi-value Validation with SimpleScalar (figure): HLS vs. SimpleScalar, Xlisp benchmark
Example use of HLS • An intuitive result: branch prediction accuracy becomes less important (crosses fewer iso-IPC contour lines) as basic block size increases. (Perl)
Example use of HLS • Another intuitive result: gains in IPC due to basic block size are front-loaded. • Trade-off between front-end (fetch/dispatch) and back-end (ILP) processor performance. (Perl)
Example use of HLS (Perl, figure)
Related work • R. Carl and J. E. Smith. Modeling superscalar processors via statistical simulation. PAID Workshop, June 1998. • N. Jouppi. The non-uniform distribution of instruction-level and machine parallelism and its effect on performance. IEEE Trans. on Computers, 1989. • D. Noonburg and J. Shen. Theoretical modeling of superscalar processor performance. MICRO-27, November 1994.
Questions & Future Directions • How important are differences among well-behaved benchmarks anyway? • easily summarized • summaries are not precise, yet precise enough • Will the statistical + symbolic technique work for poorly behaved applications? • Will it extend to deeper pipelines and more realistic processors (e.g., Alpha, P6 architectures)?
Conclusion • HLS: Statistical + Symbolic Execution • Intuitive design-space exploration • Fast • Accurate • Flexible • Validated against cycle-by-cycle simulation and an R10k • Future work: deeper pipelines, more hardware validations, additional domains • Source code at: http://arch.cs.ucdavis.edu/~oskin