
What Programming Language/Compiler Researchers should Know about Computer Architecture



Presentation Transcript


  1. What Programming Language/Compiler Researchers should Know about Computer Architecture — Lizy Kurian John, Department of Electrical and Computer Engineering, The University of Texas at Austin (LCA)

  2. Somebody once said “Computers are dumb actors and compilers/programmers are the master playwrights.”

  3. Computer Architecture Basics
  • ISAs
  • RISC vs. CISC
  • Assembly language coding
  • Datapath (ALU) and controller
  • Pipelining
  • Caches
  • Out-of-order execution
  Reference: Hennessy and Patterson architecture books

  4. Basics
  • ILP (instruction-level parallelism)
  • DLP (data-level parallelism)
  • TLP (thread-level parallelism)
  • Massive parallelism
  • SIMD/MIMD
  • VLIW
  • Performance and power metrics
  References: Hennessy and Patterson architecture books; ASPLOS, ISCA, MICRO, HPCA

  5. The Bottom Line
  • Programming language choice affects performance and power (e.g., Java)
  • Compilers affect performance and power

  6. A Java Hardware Interpreter
  [Figure: Java class file → bytecodes → hardware bytecode translator (fetch, decode, execute) → native machine instructions → native executable]
  • Radhakrishnan, Ph.D. 2000 (ISCA 2000, ICS 2001)
  • This technique was used by Nazomi Communications and Parthus (Chicory Systems)

  7. Hard-Int Performance
  • Hard-Int performs consistently better than the interpreter
  • In JIT mode, significant performance boost in 4 of the 5 applications

  8. Compiler and Power
  [Figure: two 4-cycle schedules of the same data dependence graph (DDG) of six operations A–F. One schedule issues at most two operations per cycle (peak power = 2, energy = 6); the other packs three operations into one cycle (peak power = 3, energy = 6). Same energy, different peak power.]

  9. Valluri et al., 2001 HPCA workshop
  • Quantitative study
  • Influence of state-of-the-art optimizations on the energy and power of the processor examined
  • Optimizations studied:
  • Standard -O1 to -O4 of DEC Alpha’s cc compiler
  • Four individual optimizations: simple basic-block instruction scheduling, loop unrolling, function inlining, and aggressive global scheduling

  10. Standard Optimizations on Power

  11. Somebody once said “Computers are dumb actors and compilers/programmers are the master playwrights.”

  12. A large part of a modern out-of-order processor is hardware that could have been eliminated if a good compiler existed.

  13. Let me get more arrogant: a large part of modern out-of-order processors was designed because computer architects thought compiler writers could not do a good job.

  14. Value Prediction — a slap in the face (Shen and Lipasti)

  15. Value Locality
  • Likelihood that an instruction’s computed result, or a similar predictable result, will occur soon
  • Observation: a limited set of unique values constitutes the majority of values produced and consumed during execution

  16. Load Value Locality

  17. Causes of value locality
  • Data redundancy: many 0s, sparse matrices, white space in files, empty cells in spreadsheets
  • Program constants
  • Computed branches: the base address for a jump table is a run-time constant
  • Virtual function calls: involve code to load a function pointer, which can be constant

  18. Causes of value locality (contd.)
  • Memory alias resolution: the compiler conservatively generates code that may contain stores that alias with loads
  • Register spill code: stores and subsequent loads
  • Convergent algorithms: parts of the algorithm converge before global convergence
  • Polling algorithms

  19. Two Extremist Views
  • Anything that can be done in hardware should be done in hardware.
  • Anything that can be done in software should be done in software.

  20. What do we need? The dumb actor, or the defiant actor who pays very little attention to the script?

  21. Challenging all compiler writers
  The last 15 years were the defiant actor’s era. What about the next 15? TLP, multithreading, parallelizing compilers: it’s time for a lot more dumb acting from the architect’s side, and it’s time for some good scriptwriting from the compiler writer’s side.

  22. BACKUP

  23. Compiler Optimizations
  • cc: native C compiler on the DEC Alpha 21064 running the OSF/1 operating system
  • gcc: used to study the effect of individual optimizations

  24. Standard Optimization Levels in cc
  • -O0: no optimizations performed
  • -O1: local optimizations such as CSE, copy propagation, IVE, etc.
  • -O2: inline expansion of static procedures and global optimizations such as loop unrolling and instruction scheduling
  • -O3: inline expansion of global procedures
  • -O4: software pipelining, loop vectorization, etc.

  25. Standard Optimization Levels in gcc
  • -O0: no optimizations performed
  • -O1: local optimizations such as CSE, copy propagation, dead-code elimination, etc.
  • -O2: aggressive instruction scheduling
  • -O3: inlining of procedures
  NOTE:
  • Almost the same optimizations at each level of cc and gcc
  • In cc and gcc, the optimizations that increase ILP are in levels -O2, -O3, and -O4
  • cc used wherever possible; gcc used where specific hooks are required

  26. Individual Optimizations
  • Four gcc optimizations, each applied on top of -O1:
  • -fschedule-insns: local register allocation followed by basic-block list scheduling
  • -fschedule-insns2: postpass scheduling
  • -finline-functions: integrates all simple functions into their callers
  • -funroll-loops: performs loop unrolling

  27. Some observations
  • Energy consumption is reduced when the number of instructions is reduced, i.e., when the total work done is less, energy is less
  • Power dissipation is directly proportional to IPC

  28. Observations (contd.)
  • Function inlining was found to be good for both power and energy
  • Unrolling was found to be good for energy consumption but bad for power dissipation

  29. MMX/SIMD
  Automatic use of SIMD ISAs is still difficult, 10+ years after the introduction of MMX.

  30. Standard Optimizations on Power (contd.)
