
Recap from last time


Presentation Transcript


  1. Recap from last time • We were trying to do Common Subexpression Elimination • Compute expressions that are available at each program point

  2. Recap from last time • Node X := Y op Z: F_{X := Y op Z}(in) = in − { X → * } − { * → … X … } ∪ { X → Y op Z | X ≠ Y ∧ X ≠ Z } • Node X := Y: F_{X := Y}(in) = in − { X → * } − { * → … X … } ∪ { X → E | Y → E ∈ in }
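
A minimal sketch of these two flow functions, assuming facts are pairs (var, expr) meaning "var currently holds expr", with every expression represented as a tuple (op, arg1, arg2); the function and variable names here are illustrative, not from the slides:

```python
def flow_assign_op(in_facts, X, op, Y, Z):
    """F_{X := Y op Z}: kill facts mentioning X, then gen X -> Y op Z."""
    out = {
        (v, e) for (v, e) in in_facts
        if v != X                     # kill { X -> * }
        and X not in (e[1], e[2])     # kill { * -> ... X ... }
    }
    if X != Y and X != Z:             # gen only when X is not an operand
        out.add((X, (op, Y, Z)))
    return out

def flow_copy(in_facts, X, Y):
    """F_{X := Y}: kill facts mentioning X, then copy Y's expressions to X."""
    out = {
        (v, e) for (v, e) in in_facts
        if v != X and X not in (e[1], e[2])
    }
    out |= {(X, e) for (v, e) in in_facts if v == Y}
    return out
```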

  3. Example

  4. Problems • z := j * 4 is not optimized to z := x, even though x contains the value j * 4 • m := b + a is not optimized to m := i, even though i contains a + b • w := 4 * m is not optimized to w := x, even though x contains the value 4 * m

  5. Problems: more abstractly • Available expressions is overly sensitive to name choices, operand orderings, renamings, assignments • Use SSA: distinct values have distinct names • Do copy prop before running available exprs • Adopt canonical form for commutative ops (see the sketch below)
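
A sketch of the last bullet: canonicalizing commutative operators so that a + b and b + a generate the same available-expression fact (the names are assumed for illustration):

```python
COMMUTATIVE = {"+", "*"}

def canonicalize(op, a, b):
    """Order the operands of commutative ops so equal values
    always produce the same expression tuple."""
    if op in COMMUTATIVE and b < a:
        a, b = b, a
    return (op, a, b)

assert canonicalize("+", "b", "a") == canonicalize("+", "a", "b")
```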

  6. Example in SSA • Node X := Y op Z: F_{X := Y op Z}(in) = ? • Node X := φ(Y, Z) with incoming edges in0 and in1: F_{X := φ(Y,Z)}(in0, in1) = ?

  7. Example in SSA • Node X := Y op Z: F_{X := Y op Z}(in) = in ∪ { X → Y op Z } • Node X := φ(Y, Z): F_{X := φ(Y,Z)}(in0, in1) = (in0 ∩ in1) ∪ { X → E | Y → E ∈ in0 ∧ Z → E ∈ in1 }
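
Continuing the earlier sketch with the same (var, expr) fact representation: the SSA versions need no kill sets, since each SSA variable is assigned exactly once, and the φ node intersects the facts arriving on its two edges:

```python
def flow_ssa_assign(in_facts, X, op, Y, Z):
    """F_{X := Y op Z}(in) = in U { X -> Y op Z }: no kills in SSA."""
    return in_facts | {(X, (op, Y, Z))}

def flow_ssa_phi(in0, in1, X, Y, Z):
    """F_{X := phi(Y,Z)}(in0, in1) = (in0 n in1) U
       { X -> E | Y -> E in in0 and Z -> E in in1 }."""
    exprs_y = {e for (v, e) in in0 if v == Y}
    exprs_z = {e for (v, e) in in1 if v == Z}
    return (in0 & in1) | {(X, e) for e in exprs_y & exprs_z}
```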

  8. Example in SSA

  9. Example in SSA

  10. What about pointers?

  11. What about pointers? • Option 1: don’t use SSA for pointed-to memory • Option 2: insert copies between SSA vars and real vars

  12. SSA helps us with CSE • Let’s see what else SSA can help us with • Loop-invariant code motion

  13. Loop-invariant code motion • Two steps: analysis and transformation • Step 1: find invariant computations in loop • invariant: computes same result each time evaluated • Step 2: move them outside loop • to top if used within loop: code hoisting • to bottom if used after loop: code sinking
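
A source-level illustration of the two steps; the function names and the loop are made up for this example:

```python
# Before: n * 4 computes the same value on every iteration.
def before(a, n):
    i = 0
    while i < len(a):
        a[i] = n * 4      # invariant computation inside the loop
        i += 1

# After hoisting: the invariant computation moves above the loop
# (into the preheader), so it is evaluated once.
def after(a, n):
    t = n * 4
    i = 0
    while i < len(a):
        a[i] = t
        i += 1
```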

  14. Example

  15. Example

  16. Detecting loop invariants • An expression is invariant in a loop L iff: (base cases) • it’s a constant • it’s a variable use, all of whose defs are outside of L (inductive cases) • it’s a pure computation all of whose args are loop-invariant • it’s a variable use with only one reaching def, and the rhs of that def is loop-invariant
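
A sketch of this inductive definition as a recursive check, assuming reaching-definitions results are available through defs_of(use) and that IR nodes expose is_constant/is_var/is_pure_op, rhs, and args; all of these accessors are assumptions for illustration:

```python
def is_invariant(expr, loop, defs_of, memo=None):
    memo = {} if memo is None else memo
    if expr in memo:
        return memo[expr]
    memo[expr] = False                        # break cycles pessimistically
    if expr.is_constant():                    # base case 1
        result = True
    elif expr.is_var():
        defs = defs_of(expr)
        if all(d not in loop.body for d in defs):   # base case 2
            result = True
        elif len(defs) == 1:                  # inductive case 2
            d, = defs
            result = is_invariant(d.rhs, loop, defs_of, memo)
        else:
            result = False
    elif expr.is_pure_op():                   # inductive case 1
        result = all(is_invariant(a, loop, defs_of, memo)
                     for a in expr.args)
    else:
        result = False
    memo[expr] = result
    return result
```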

  17. Computing loop invariants • Option 1: iterative dataflow analysis • optimistically assume all expressions loop-invariant, and propagate • Option 2: build def/use chains • follow chains to identify and propagate invariant expressions • Option 3: SSA • like option 2, but using SSA instead of def/use chains

  18. Example using def/use chains • An expression is invariant in a loop L iff: (base cases) • it’s a constant • it’s a variable use, all of whose defs are outside of L (inductive cases) • it’s a pure computation all of whose args are loop-invariant • it’s a variable use with only one reaching def, and the rhs of that def is loop-invariant

  19. Example using def/use chains • An expression is invariant in a loop L iff: (base cases) • it’s a constant • it’s a variable use, all of whose defs are outside of L (inductive cases) • it’s a pure computation all of whose args are loop-invariant • it’s a variable use with only one reaching def, and the rhs of that def is loop-invariant

  20. Loop invariant detection using SSA • An expression is invariant in a loop L iff: (base cases) • it’s a constant • it’s a variable use whose single def is outside of L (inductive cases) • it’s a pure computation all of whose args are loop-invariant • it’s a variable use whose single reaching def has a loop-invariant rhs • φ functions are not pure
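
The same check becomes simpler under SSA, sketched below with the same assumed IR accessors: every use has exactly one definition (def_of), so the "multiple reaching defs" case disappears, and φ nodes are rejected because they are not pure:

```python
def is_invariant_ssa(expr, loop, def_of, memo=None):
    memo = {} if memo is None else memo
    if expr in memo:
        return memo[expr]
    memo[expr] = False           # cycles through phi nodes stay non-invariant
    if expr.is_constant():
        result = True
    elif expr.is_var():
        d = def_of(expr)         # unique definition in SSA
        if d not in loop.body:
            result = True
        elif d.is_phi():
            result = False       # phi functions are not pure
        else:
            result = is_invariant_ssa(d.rhs, loop, def_of, memo)
    elif expr.is_pure_op():
        result = all(is_invariant_ssa(a, loop, def_of, memo)
                     for a in expr.args)
    else:
        result = False
    memo[expr] = result
    return result
```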

  21. Example using SSA • An expression is invariant in a loop L iff: (base cases) • it’s a constant • it’s a variable use, all of whose single defs are outside of L (inductive cases) • it’s a pure computation all of whose args are loop-invariant • it’s a variable use whose single reaching def, and the rhs of that def is loop-invariant •  functions are not pure

  22. Example using SSA and preheader • An expression is invariant in a loop L iff: (base cases) • it’s a constant • it’s a variable use, all of whose single defs are outside of L (inductive cases) • it’s a pure computation all of whose args are loop-invariant • it’s a variable use whose single reaching def, and the rhs of that def is loop-invariant •  functions are not pure

  23. Code motion

  24. Example

  25. Lesson from example: domination restriction

  26. Domination restriction in for loops

  27. Domination restriction in for loops

  28. Avoiding domination restriction

  29. Another example

  30. Data dependence restriction

  31. Avoiding data restriction

  32. Representing control dependencies • We’ve seen SSA, a way to encode data dependencies better than just def/use chains • makes CSE easier • makes loop invariant detection easier • makes code motion easier • Now we move on to looking at how to encode control dependencies

  33. Control Dependences • A node (basic block) Y is control-dependent on another X iff X determines whether Y executes • there exists a path from X to Y s.t. every node in the path other than X and Y is post-dominated by Y • X is not post-dominated by Y

  34. Control Dependences • A node (basic block) Y is control-dependent on another X iff X determines whether Y executes • there exists a path from X to Y s.t. every node in the path other than X and Y is post-dominated by Y • X is not post-dominated by Y
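
An edge-based way to compute this relation from post-dominator sets, as a sketch; postdom[n] is assumed to contain every node that post-dominates n, including n itself:

```python
def control_dependences(succ, postdom, nodes):
    """cd[Y] = set of nodes X that Y is control-dependent on:
    some successor Z of X is post-dominated by Y, but X is not."""
    cd = {y: set() for y in nodes}
    for x in nodes:
        for z in succ[x]:
            for y in postdom[z]:           # Y post-dominates the Z branch...
                if y not in postdom[x]:    # ...but does not post-dominate X
                    cd[y].add(x)
    return cd
```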

  35. Example

  36. Example

  37. Control Dependence Graph • Control dependence graph: Y is a descendant of X iff Y is control-dependent on X • label each child edge with required condition • group all children with same condition under region node • Program dependence graph: superimpose dataflow graph (in SSA form or not) on top of the control dependence graph

  38. Example

  39. Example

  40. Another example

  41. Another example

  42. Another example

  43. Summary so far • Dataflow analysis • flow functions, lattice theoretic framework, optimistic iterative analysis, MOP • Advanced Program Representations • SSA, CDG, PDG, code motion • Next: dealing with procedures

  44. Interprocedural analyses and optimizations

  45. Costs of procedure calls • Up until now, we treated calls conservatively: • make the flow function for call nodes return top • start iterative analysis with incoming edge of the CFG set to top • This leads to less precise results: the “lost-precision” cost • Calls also incur a direct runtime cost • cost of call, return, argument & result passing, stack frame maintenance • the “direct runtime” cost
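
A sketch of the conservative flow function for call nodes under the available-expressions fact sets used earlier; with no information about the callee, every fact is assumed clobbered:

```python
def flow_call(in_facts):
    """Treat the callee as if it could invalidate every fact:
    nothing is available after an unanalyzed call."""
    return set()
```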

  46. Addressing costs of procedure calls • Technique 1: try to get rid of calls, using inlining and other techniques • Technique 2: interprocedural analysis, for calls that are left

  47. Inlining • Replace call with body of callee • Turn parameter- and result-passing into assignments • do copy prop to eliminate copies • Manage variable scoping correctly • rename variables where appropriate
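
A before/after illustration of these steps; the functions are made up for the example:

```python
# Before inlining: caller pays call/return and argument-passing overhead.
def add1(x):
    return x + 1

def caller(a):
    return add1(a * 2)

# After inlining: parameter and result passing become assignments,
# which copy propagation can then eliminate.
def caller_inlined(a):
    x = a * 2        # parameter passing turned into an assignment
    ret = x + 1      # callee body; result turned into an assignment
    return ret
```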

  48. Program representation for inlining • Call graph • nodes are procedures • edges are calls, labelled by invocation counts/frequency • Hard cases for building call graph • calls to/from external routines • calls through pointers, function values, messages • Where in the compiler should inlining be performed?
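
A sketch of a call-graph representation matching the slide, with edges labelled by invocation counts; the class shape is an assumption for illustration:

```python
from collections import defaultdict

class CallGraph:
    def __init__(self):
        # caller -> callee -> invocation count (profiled or estimated)
        self.edges = defaultdict(lambda: defaultdict(int))

    def add_call(self, caller, callee, count=1):
        self.edges[caller][callee] += count

    def hottest_callees(self, caller):
        """Callees sorted by call frequency: natural inlining candidates."""
        return sorted(self.edges[caller].items(), key=lambda kv: -kv[1])

cg = CallGraph()
cg.add_call("main", "parse")
cg.add_call("parse", "next_token", count=1000)
print(cg.hottest_callees("parse"))    # [('next_token', 1000)]
```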

  49. Inlining pros and cons (discussion)

  50. Inlining pros and cons • Pros • eliminate overhead of call/return sequence • eliminate overhead of passing args & returning results • can optimize callee in context of caller and vice versa • Cons • can increase compiled code space requirements • can slow down compilation • recursion? • Virtual inlining: simulate inlining during analysis of caller, but don’t actually perform the inlining
