
Optimizing Compilers CISC 673 Spring 2009 Dynamic Compilation II


Presentation Transcript


  1. Optimizing Compilers, CISC 673, Spring 2009: Dynamic Compilation II. John Cavazos, University of Delaware

  2. What is in a Dynamic Compiler? • Interpretation • Popular approach for high-level languages • e.g., Python, APL, SNOBOL, BCPL, Perl, MATLAB • Useful for memory-challenged environments • Low startup time & space overhead, but much slower than native code execution • MMI (Mixed Mode Interpreter) [Suganuma '01] • Fast interpreter implemented in assembler

  3. What is in a Dynamic Compiler? • Quick compilation • Reduced set of optimizations for fast compilation, little inlining • Full compilation • Full optimizations only for selected hot methods • Classic just-in-time compilation • Compile methods to native code on first invocation • e.g., ParcPlace Smalltalk-80, Self-91 • Initial high (time & space) overhead for each compilation • Precludes use of sophisticated optimizations (e.g., SSA) • Responsible for many of today's myths

  4. Interpretation vs. JIT (figure: total running time comparison, roughly 20 time units compiled versus 2000 time units interpreted)

  5. Selective Optimization Hypothesis: most execution is spent in a small percentage of methods Idea: use two execution strategies 1. Interpreter or non-optimizing compiler 2. Full-fledged optimizing compiler Strategy: • Use option 1 for initial execution of all methods • Profile to find “hot” subset of methods • Use option 2 on this subset
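A minimal sketch of this two-strategy setup, in Java since the surrounding material is JVM-centric. The class and method names (SelectiveOptimizer, OptimizingCompiler, onInvocation) and the threshold value are illustrative assumptions, not part of any real VM:

  // Selective optimization sketch: every method starts on the cheap baseline
  // path; only methods whose invocation counts mark them as hot are handed to
  // the optimizing compiler. All names and the threshold are illustrative.
  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  class SelectiveOptimizer {
      private static final int HOT_THRESHOLD = 1000;   // tuning parameter
      private final Map<String, Integer> counts = new ConcurrentHashMap<>();

      // Called on every method entry by the interpreter / baseline compiler path.
      void onInvocation(String methodName) {
          int n = counts.merge(methodName, 1, Integer::sum);
          if (n == HOT_THRESHOLD) {
              // Strategy 2: promote the hot method to the optimizing compiler.
              OptimizingCompiler.compile(methodName);
          }
      }
  }

  class OptimizingCompiler {
      static void compile(String methodName) {
          System.out.println("optimizing " + methodName);
      }
  }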

  6. Selective Optimization • Selective opt: compiles 20% of methods, representing 99% of execution time • (figure: execution-time comparison, 20 time units vs. 2000 time units)

  7. Designing an Adaptive Optimization System • What is the system architecture? • What are the profiling mechanisms and policies for driving recompilation? • How effective are these systems?

  8. Basic Structure of a Dynamic Compiler • Still needs a good core compiler, but more • (figure: Program feeds a pipeline of structural optimizations (inlining, unrolling, loop permutation), scalar optimizations (CSE, constants, expressions), memory optimizations (scalar replacement, pointers), then register allocation, scheduling, and peephole optimization, producing machine code)

  9. Basic Structure of a Dynamic Compiler • (figure: the executing program runs instrumented code, or an interpreter / simple translation; raw profile data flows to a profile processor; the processed profile, together with a history of prior decisions, feeds the controller; the controller issues compilation decisions to the compiler subsystem, which applies compile-time optimizations)
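A hedged sketch of how the controller in this diagram could be wired up; the queue-based hand-off, the class names, and the simple decision history are assumptions for illustration, not the actual Jikes RVM design:

  // Controller sketch: hot-method events arrive from the profile processor,
  // the controller consults its history of prior decisions, and new methods
  // become compilation decisions for the compiler subsystem.
  import java.util.HashSet;
  import java.util.Set;
  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;

  class Controller implements Runnable {
      private final BlockingQueue<String> processedProfile = new LinkedBlockingQueue<>();
      private final Set<String> priorDecisions = new HashSet<>();   // history

      void submitHotMethod(String methodName) {
          processedProfile.add(methodName);
      }

      @Override
      public void run() {
          try {
              while (true) {
                  String method = processedProfile.take();
                  if (priorDecisions.add(method)) {
                      // Compilation decision: hand the method to the compiler subsystem.
                      System.out.println("recompile " + method);
                  }
              }
          } catch (InterruptedException e) {
              Thread.currentThread().interrupt();
          }
      }
  }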

  10. Method Profiling • Counters • Call Stack Sampling • Combinations

  11. Method Profiling: Counters • Insert method-specific counter on method entry and loop back edges • Counts how often a method is called and approximates how much time is spent in a method • Very popular approach: Self, HotSpot • Issues: overhead for incrementing counter can be significant • Not present in optimized code

  12. Method Profiling: Counters
  foo ( … ) {
    fooCounter++;
    if (fooCounter > Threshold) {
      recompile( … );
    }
    . . .
  }
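A compilable version of the same idea, with counters bumped on method entry and on loop back edges as slide 11 describes; the threshold value and the recompile() hook are illustrative placeholders:

  // Counter-based profiling sketch: the baseline compiler would insert the
  // counter updates shown here on method entry and on each loop back edge.
  class CounterProfilingExample {
      private static final int THRESHOLD = 10_000;   // illustrative threshold
      private static int fooCounter = 0;

      static long foo(int[] data) {
          if (++fooCounter > THRESHOLD) recompile("foo");     // method entry
          long sum = 0;
          for (int i = 0; i < data.length; i++) {
              sum += data[i];
              if (++fooCounter > THRESHOLD) recompile("foo"); // loop back edge
          }
          return sum;
      }

      private static void recompile(String method) {
          // In a real VM this would schedule the method for optimizing recompilation.
          System.out.println("request recompilation of " + method);
      }
  }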

  13. Method Profiling: Call Stack Sampling • Periodically record which method(s) are on the call stack • Approximates amount of time spent in each method • Can be compiled into the code • Jikes RVM, JRockit • or use hardware sampling • Issues: timer-based sampling is not deterministic
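As a user-level approximation only (real VMs sample inside the runtime rather than through this API), call-stack sampling can be sketched with a periodic daemon thread; the class name and sampling interval are illustrative:

  // Call-stack sampling sketch: periodically record the top frame of every
  // thread's stack and count how often each method is seen. This approximates
  // the in-VM sampling the slide describes; it is not how Jikes RVM or JRockit
  // implement it.
  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  class StackSampler {
      private final Map<String, Integer> samples = new ConcurrentHashMap<>();

      void start(long intervalMillis) {
          Thread sampler = new Thread(() -> {
              while (!Thread.currentThread().isInterrupted()) {
                  for (StackTraceElement[] stack : Thread.getAllStackTraces().values()) {
                      if (stack.length > 0) {
                          String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                          samples.merge(top, 1, Integer::sum);   // one sample for the top method
                      }
                  }
                  try {
                      Thread.sleep(intervalMillis);
                  } catch (InterruptedException e) {
                      return;
                  }
              }
          });
          sampler.setDaemon(true);
          sampler.start();
      }

      Map<String, Integer> hotMethods() {
          return samples;
      }
  }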

  14. Method Profiling: Call Stack Sampling • (figure: a call stack where A calls B and B calls C, sampled repeatedly over time; A appears in every sample, B and C in fewer)

  15. Method Profiling: Mixed • Combinations • Use counters initially and sampling later on • IBM DK for Java

  16. Method Profiling: Mixed • Software/hardware combination • Use interrupts & sampling
  foo ( … ) {
    if (flag is set) {
      sample( … );
    }
    . . .
  }

  17. Recompilation Policies: Which Candidates to Optimize? Problem: given optimization candidates, which should be optimized? • Counters: • Optimize method that surpasses threshold • Simple, but hard to tune, doesn’t consider context • Optimize method on the call stack based on inlining policies • Addresses context issue • Call Stack Sampling: • Optimize all methods that are sampled • Simple, but doesn’t consider frequency of sampled methods • Use Cost/benefit model • Seemingly complicated, but easy to engineer • Maintenance free • Naturally supports multiple optimization levels

  18. Jikes RVM: Recompilation Policy – Cost/Benefit Model • Define • cur, current opt level for method m • Exe(j), expected future execution time at level j • Comp(j), compilation cost at opt level j • Choose j > cur that minimizes Exe(j) + Comp(j) • If Exe(j) + Comp(j) < Exe(cur) recompile at level j • Assumptions • Sample data determines how long a method has executed • Method will execute as much in the future as it has in the past • Compilation cost and speedup are offline averages
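A small numeric sketch of this policy; the expected-execution-time and compile-cost arrays hold made-up illustrative values, not measured Jikes RVM data:

  // Cost/benefit recompilation sketch: choose the opt level j > cur that
  // minimizes Exe(j) + Comp(j), and recompile only if that total beats the
  // expected cost of staying at the current level. The values are invented.
  class CostBenefitModel {
      // Exe(j): expected future execution time of the method at opt level j (ms).
      static final double[] EXE  = { 400.0, 250.0, 180.0, 150.0 };
      // Comp(j): one-time compilation cost at opt level j (ms).
      static final double[] COMP = {   0.0,  10.0,  40.0, 120.0 };

      // Returns the level to recompile at, or cur if recompilation does not pay off.
      static int chooseLevel(int cur) {
          int best = cur;
          double bestTotal = EXE[cur];                 // cost of doing nothing
          for (int j = cur + 1; j < EXE.length; j++) {
              double total = EXE[j] + COMP[j];
              if (total < bestTotal) {
                  bestTotal = total;
                  best = j;
              }
          }
          return best;
      }

      public static void main(String[] args) {
          System.out.println("recompile at level " + chooseLevel(0));   // level 2 for these numbers
      }
  }

In a real system, Exe(j) would be derived from the sampled execution time so far and offline speedup averages, as the slide's assumptions note.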

  19. Startup Programs: Jikes RVM [Hind et al.’04] No FDO, Mar’04, AIX/PPC

  20. Startup Programs: Jikes RVM No FDO, Mar’04, AIX/PPC

  21. Steady State: Jikes RVM No FDO, Mar’04, AIX/PPC

  22. Steady State: Jikes RVM

  23. Feedback-Directed Optimization (FDO) • Exploit information gathered at run-time to optimize execution • "Selective optimization": what to optimize • "FDO": how to optimize

  24. Advantages of FDO • Can exploit dynamic information that cannot be inferred statically • System can change and revert decisions when conditions change • Runtime binding allows more flexible systems

  25. Challenges for automatic online FDO • Compensate for profiling overhead • Compensate for runtime transformation overhead • Account for the partial profile available and for changing conditions

  26. Profiling for What to Do • Clients • Inlining, unrolling, method dispatch • Dispatch tables, synchronization services, GC • Prefetching • Misses, hardware performance monitors [Adl-Tabatabai et al. '04] • Code layout • Values (loop counts) • Edges & paths

  27. Profiling for What to Do • Myth: Sophisticated profiling is too expensive to perform online • Reality: Well-known technology can collect sophisticated profiles with sampling and minimal overhead

  28. Method Profiling: Timer Based • Useful for more than profiling • Jikes RVM • Schedule garbage collection • Thread scheduling policies, etc.
  // the thread scheduler / timer interrupt periodically sets the flag
  class Thread
    scheduler (...) { ... flag = 1; }
  void handler (...) {
    // sample stack, perform GC, swap threads, etc.
    ....
    flag = 0;
  }
  foo ( … ) {
    // on method entry, exit, & all loop back edges
    if (flag) {
      handler( … );
    }
    . . .
  }
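A self-contained sketch of the flag-and-handler pattern above, using a Java timer in place of the VM's timer interrupt; the class name and the 10 ms period are illustrative choices, not the Jikes RVM values:

  // Timer-based profiling sketch: a periodic timer sets a flag, and instrumented
  // methods poll the flag at entry/exit/back edges, calling the handler when set.
  import java.util.Timer;
  import java.util.TimerTask;

  class TimerFlagProfiler {
      static volatile boolean flag = false;

      static void start() {
          Timer timer = new Timer(true);               // stands in for the timer interrupt
          timer.scheduleAtFixedRate(new TimerTask() {
              @Override public void run() { flag = true; }
          }, 10, 10);                                  // every 10 ms (illustrative)
      }

      static void handler() {
          // Sample the caller here; a real VM might also trigger GC or thread switching.
          System.out.println("sample: " + Thread.currentThread().getStackTrace()[2]);
          flag = false;
      }

      static void foo() {
          if (flag) handler();                         // poll inserted on method entry
          for (int i = 0; i < 1_000_000; i++) {
              if (flag) handler();                     // poll inserted on loop back edge
          }
      }

      public static void main(String[] args) {
          start();
          for (int i = 0; i < 100; i++) foo();
      }
  }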

  29. Arnold-Ryder [PLDI 01]: Full Duplication Profiling • Generate two copies of a method • Execute "fast path" most of the time • Execute "slow path" with detailed profiling occasionally • Adapted by J9 due to proven accuracy and low overhead
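A rough Java-level sketch of the full-duplication idea, assuming a simple counter-based check decides when to take the instrumented slow path; the sampling rate and method names are illustrative, and the real Arnold-Ryder scheme places its checks in the compiled code at method entries and back edges:

  // Full-duplication profiling sketch: the method body exists twice. The fast
  // path carries only a cheap check; the duplicated slow path carries the
  // detailed instrumentation and is entered only occasionally.
  class FullDuplicationExample {
      private static int checkCounter = 0;
      private static final int SAMPLE_EVERY = 1000;    // illustrative sampling rate
      private static long slowPathVisits = 0;          // example of detailed profile data

      static int sum(int[] data) {
          if (++checkCounter % SAMPLE_EVERY != 0) {
              // Fast path: the original code, no instrumentation.
              int s = 0;
              for (int x : data) s += x;
              return s;
          } else {
              // Slow path: duplicated code with detailed profiling hooks.
              slowPathVisits++;
              int s = 0;
              for (int x : data) {
                  s += x;
                  // e.g., record value or edge profiles here
              }
              return s;
          }
      }
  }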

  30. Suggested Reading: Dynamic Compilation • Adaptive Optimization in the Jalapeño JVM, M. Arnold, S. Fink, D. Grove, M. Hind, and P. Sweeney, Proceedings of the 2000 ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages & Applications (OOPSLA '00), pages 47-65, Oct. 2000.
