CS308 Compiler Theory

Presentation Transcript

  1. CS308 Compiler Theory

  2. Syntax-Directed Translation
  • Grammar symbols are associated with attributes to attach information to the programming-language constructs they represent.
  • Values of these attributes are evaluated by the semantic rules associated with the production rules.
  • Evaluation of these semantic rules:
  • may generate intermediate code
  • may put information into the symbol table
  • may perform type checking
  • may issue error messages
  • may perform some other activities; in fact, they may perform almost any activity.
  • An attribute may hold almost anything: a string, a number, a memory location, a complex record.

  3. Syntax-Directed Definitions and Translation Schemes
  • When we associate semantic rules with productions, we use two notations:
  • Syntax-Directed Definitions
  • Translation Schemes
  • Syntax-Directed Definitions:
  • give high-level specifications for translations
  • hide many implementation details, such as the order of evaluation of semantic actions.
  • We associate a production rule with a set of semantic actions, and we do not say when they will be evaluated.
  • Translation Schemes:
  • indicate the order of evaluation of the semantic actions associated with a production rule.
  • In other words, translation schemes give a little information about implementation details.

  4. Syntax-Directed Definitions
  • A syntax-directed definition is a generalization of a context-free grammar in which:
  • Each grammar symbol is associated with a set of attributes.
  • This set of attributes for a grammar symbol is partitioned into two subsets, called the synthesized and inherited attributes of that grammar symbol.
  • Each production rule is associated with a set of semantic rules.
  • Semantic rules set up dependencies between attributes, which can be represented by a dependency graph.
  • This dependency graph determines the evaluation order of the semantic rules.
  • Evaluation of a semantic rule defines the value of an attribute. But a semantic rule may also have side effects, such as printing a value.

  5. Syntax-Directed Definition -- Example
  Production          Semantic Rule
  L → E return        print(E.val)
  E → E1 + T          E.val = E1.val + T.val
  E → T               E.val = T.val
  T → T1 * F          T.val = T1.val * F.val
  T → F               T.val = F.val
  F → ( E )           F.val = E.val
  F → digit           F.val = digit.lexval
  • Symbols E, T, and F are associated with a synthesized attribute val.
  • The token digit has a synthesized attribute lexval (it is assumed to be evaluated by the lexical analyzer).
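The semantic rules above can be evaluated bottom-up over a parse tree. A minimal Python sketch (the tuple-based tree representation and node names are illustrative, not part of the slides):

```python
def eval_val(node):
    """Evaluate the synthesized attribute `val` on a parse tree.

    A node is either ('digit', lexval) or (op, left, right) with op '+' or '*'.
    """
    kind = node[0]
    if kind == 'digit':
        return node[1]                               # F.val = digit.lexval
    left, right = node[1], node[2]
    if kind == '+':
        return eval_val(left) + eval_val(right)      # E.val = E1.val + T.val
    if kind == '*':
        return eval_val(left) * eval_val(right)      # T.val = T1.val * F.val
    raise ValueError('unknown node kind: %r' % kind)

# Parse tree for 3 + 4 * 2
tree = ('+', ('digit', 3), ('*', ('digit', 4), ('digit', 2)))
```

Because val is purely synthesized, a single post-order traversal suffices, which is why this definition also works during bottom-up parsing.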

  6. Translation Schemes
  • In a syntax-directed definition, we do not say anything about the evaluation times of the semantic rules (i.e., when the semantic rules associated with a production should be evaluated).
  • A translation scheme is a context-free grammar in which:
  • attributes are associated with the grammar symbols, and
  • semantic actions enclosed between braces {} are inserted within the right sides of productions.
  • Ex: A → { ... } X { ... } Y { ... }   (the braces hold semantic actions)

  7. A Translation Scheme Example
  • A simple translation scheme that converts infix expressions to the corresponding postfix expressions:
  E → T R
  R → + T { print("+") } R1
  R → ε
  T → id { print(id.name) }
  • Example: the infix expression a+b+c is translated to the postfix expression ab+c+.
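This scheme maps directly onto a recursive-descent translator: each nonterminal becomes a function, and each embedded action becomes an output statement at the corresponding point. A sketch under simplifying assumptions (single-character identifiers, '+' as the only operator):

```python
def infix_to_postfix(s):
    """Translate an infix string like 'a+b+c' to postfix, following
    the scheme E -> T R,  R -> + T {print +} R | epsilon,  T -> id {print id}."""
    out = []
    pos = [0]                      # current input position (mutable closure)

    def T():                       # T -> id { print(id.name) }
        out.append(s[pos[0]])
        pos[0] += 1

    def R():                       # R -> + T { print("+") } R1  |  epsilon
        if pos[0] < len(s) and s[pos[0]] == '+':
            pos[0] += 1
            T()
            out.append('+')        # action fires after T, before recursing
            R()

    def E():                       # E -> T R
        T()
        R()

    E()
    return ''.join(out)
```

Note how the position of the action inside the production fixes when it executes: printing "+" after T but before R1 is exactly what yields left-associative postfix output.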

  8. Type Checking
  • A compiler has to do semantic checks in addition to syntactic checks.
  • Semantic checks:
  • Static – done during compilation
  • Dynamic – done at run-time
  • Type checking is one of these static checking operations.
  • We may not do all type checking at compile-time; some systems also use dynamic type checking.
  • A type system is a collection of rules for assigning type expressions to the parts of a program.
  • A type checker implements a type system.
  • A sound type system eliminates run-time checking for type errors.
  • A programming language is strongly-typed if every program its compiler accepts will execute without type errors.
  • In practice, some type checking operations are done at run-time (so most programming languages are not strongly-typed).
  • Ex: int x[100]; ... x[i] – most compilers cannot guarantee that i will be between 0 and 99.

  9. Intermediate Code Generation
  • Intermediate codes are machine-independent codes, but they are close to machine instructions.
  • The given program in a source language is converted to an equivalent program in an intermediate language by the intermediate code generator.
  • The intermediate language can be one of many different languages; the designer of the compiler decides on this intermediate language.
  • syntax trees can be used as an intermediate language.
  • postfix notation can be used as an intermediate language.
  • three-address code (quadruples) can be used as an intermediate language.
  • we will use quadruples to discuss intermediate code generation.
  • quadruples are close to machine instructions, but they are not actual machine instructions.
  • some programming languages have well-defined intermediate languages:
  • Java – Java virtual machine
  • Prolog – Warren abstract machine
  • In fact, there are byte-code emulators to execute instructions in these intermediate languages.

  10. Three-Address Code (Quadruples)
  • A quadruple is: x := y op z, where x, y and z are names, constants or compiler-generated temporaries; op is any operator.
  • But we may also use the following notation for quadruples (a better notation, because it looks like a machine-code instruction): op y,z,x – apply operator op to y and z, and store the result in x.
  • We use the term "three-address code" because each statement usually contains three addresses (two for the operands, one for the result).
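A quadruple is easy to represent as a small record. A minimal sketch (the Quad type and new_temp helper are illustrative names, not from the slides):

```python
from collections import namedtuple

# Illustrative quadruple: apply op to y and z, store the result in x.
Quad = namedtuple('Quad', 'op y z x')

_counter = [0]
def new_temp():
    """Return a fresh compiler-generated temporary name: t1, t2, ..."""
    _counter[0] += 1
    return 't%d' % _counter[0]

# a := b * c + d translated into two quadruples:
t1 = new_temp()
code = [Quad('*', 'b', 'c', t1),    # t1 := b * c
        Quad('+', t1, 'd', 'a')]    # a  := t1 + d
```

Each source-level expression with more than one operator introduces temporaries, so the flattened quadruple list stays one operator per instruction.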

  11. Arrays
  • Elements of arrays can be accessed quickly if the elements are stored in a block of consecutive locations.
  • For a one-dimensional array A:
  • baseA is the address of the first location of the array A,
  • width is the width of each array element,
  • low is the index of the first array element.
  • The location of A[i] → baseA + (i-low)*width

  12. Arrays (cont.)
  • baseA + (i-low)*width can be re-written as i*width + (baseA - low*width), where the first term must be computed at run-time and the second term can be computed at compile-time.
  • So, the location of A[i] can be computed at run-time by evaluating the formula i*width + c, where c is (baseA - low*width), which is evaluated at compile-time.
  • The intermediate code generator should produce the code to evaluate this formula i*width + c (one multiplication and one addition operation).
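The compile-time/run-time split can be checked with a few lines of Python (the numeric values for baseA, low and width below are made up for illustration):

```python
def element_address(base, low, width, i):
    """Address of A[i] using the split formula i*width + c."""
    c = base - low * width     # compile-time constant: (baseA - low*width)
    return i * width + c       # run-time part: one multiply, one add

# Hypothetical example: baseA = 1000, low = 1, width = 4 bytes.
# A[5] lands at 1000 + (5-1)*4 = 1016, same as the split formula gives.
```

The point of the rewrite is only that c is a constant per array, so the generated code needs just the multiply and add at run-time.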

  13. Two-Dimensional Arrays (cont.)
  • The location of A[i1,i2] is baseA + ((i1-low1)*n2 + i2-low2)*width, where
  • baseA is the location of the array A,
  • low1 is the index of the first row,
  • low2 is the index of the first column,
  • n2 is the number of elements in each row,
  • width is the width of each array element.
  • Again, this formula can be re-written as ((i1*n2)+i2)*width + (baseA - ((low1*n2)+low2)*width), where the first term must be computed at run-time and the second term can be computed at compile-time.

  14. Multi-Dimensional Arrays
  • In general, the location of A[i1,i2,...,ik] is ((...((i1*n2)+i2)...)*nk+ik)*width + (baseA - ((...((low1*n2)+low2)...)*nk+lowk)*width)
  • So, the intermediate code generator should produce the code to evaluate the following formula (to find the location of A[i1,i2,...,ik]): ((...((i1*n2)+i2)...)*nk+ik)*width + c
  • To evaluate the ((...((i1*n2)+i2)...)*nk+ik portion of this formula, we can use the recurrence: e1 = i1, em = em-1 * nm + im
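The recurrence e1 = i1, em = em-1 * nm + im is a left-to-right fold over the indices. A sketch (note that n1 never appears in the row-major formula, so dims[0] is unused; all concrete values below are illustrative):

```python
def array_offset(indices, dims, lows, width, base):
    """Address of A[i1,...,ik] via the recurrence e1=i1, em = e(m-1)*nm + im.

    dims[m] is n(m+1); dims[0] (i.e. n1) is never used, matching the formula.
    """
    e = indices[0]
    for m in range(1, len(indices)):
        e = e * dims[m] + indices[m]          # em = e(m-1) * nm + im
    lo = lows[0]                              # same recurrence over the lows
    for m in range(1, len(lows)):
        lo = lo * dims[m] + lows[m]
    c = base - lo * width                     # compile-time constant
    return e * width + c                      # run-time part
```

For a 2D array with baseA=100, low1=low2=1, n2=4, width=4, the direct formula gives A[2,3] at 100 + ((2-1)*4 + (3-1))*4 = 124, and the recurrence agrees.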

  15. Translation Scheme for Arrays
  S → L := E    { if (L.offset is null) emit('mov' E.place ',' L.place)
                  else emit('mov' E.place ',' L.place '[' L.offset ']') }
  E → E1 + E2   { E.place = newtemp(); emit('add' E1.place ',' E2.place ',' E.place) }
  E → ( E1 )    { E.place = E1.place }
  E → L         { if (L.offset is null) E.place = L.place
                  else { E.place = newtemp(); emit('mov' L.place '[' L.offset ']' ',' E.place) } }

  16. Translation of Flow-of-Control Statements
  S → if (E) S1 | if (E) S1 else S2 | while (E) S1 | S1 S2
  • S.next: the label attached to the first three-address instruction to be executed after the code for S

  17. Code layout for flow-of-control statements
  (a) if-then:
      E.code          (jumps to E.true / E.false)
      E.true:  S1.code
      E.false: ...
  (b) if-then-else:
      E.code          (jumps to E.true / E.false)
      E.true:  S1.code
               goto S.next
      E.false: S2.code
      S.next:  ...
  (c) while-do:
      S.begin: E.code (jumps to E.true / E.false)
      E.true:  S1.code
               goto S.begin
      E.false: ...

  18. Using fall-through
  E → E1 relop E2
  { test = E1 relop E2
    s = if E.true != fall and E.false != fall then gen('if' test 'goto' E.true) || gen('goto' E.false)
        else if E.true != fall then gen('if' test 'goto' E.true)
        else if E.false != fall then gen('if' !test 'goto' E.false)
        else ''
    E.code := E1.code || E2.code || s }
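The case analysis above is small enough to implement directly. A sketch, using a sentinel object for the special label "fall" (the function name and instruction strings are illustrative):

```python
fall = object()   # sentinel: control falls through, so no jump is needed

def relop_code(test, true_lbl, false_lbl):
    """Jumping code for E -> E1 relop E2, mirroring the scheme's four cases."""
    if true_lbl is not fall and false_lbl is not fall:
        return ['if %s goto %s' % (test, true_lbl),
                'goto %s' % false_lbl]
    if true_lbl is not fall:                       # false exit falls through
        return ['if %s goto %s' % (test, true_lbl)]
    if false_lbl is not fall:                      # true exit falls through
        return ['if not %s goto %s' % (test, false_lbl)]
    return []                                      # both fall through: no code
```

The payoff is the middle two cases: whenever one successor is the next instruction anyway, a jump is saved.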

  19. Backpatching
  • Backpatching allows generation of intermediate code in one pass. (The problem with the previous translation scheme is that we have inherited attributes such as S.next, which are not suitable to implement in bottom-up parsers.)
  • Idea: the labels (in the three-address code) will be filled in when we know the places.
  • Attributes: E.truelist (list of true exits), E.falselist (list of false exits)

  20. S → if E then M S1 { backpatch(E.truelist, M.quad); S.nextlist := merge(E.falselist, S1.nextlist) }
  • S → if E then M1 S1 N else M2 S2 { backpatch(E.truelist, M1.quad); backpatch(E.falselist, M2.quad); S.nextlist := merge(S1.nextlist, N.nextlist, S2.nextlist) }
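The helper functions used in these rules (emit, makelist, merge, backpatch) can be sketched in a few lines. This is a minimal illustration, not the full scheme; instructions are plain strings and '_' stands for an unfilled quad number:

```python
code = []                         # the emitted three-address instructions

def emit(instr):
    """Append an instruction and return its quad number."""
    code.append(instr)
    return len(code) - 1

def makelist(i):
    """A new list containing only quad number i."""
    return [i]

def merge(*lists):
    """Concatenate several lists of quad numbers."""
    return [i for lst in lists for i in lst]

def backpatch(lst, quad):
    """Fill the target of every instruction on lst with quad."""
    for i in lst:
        code[i] = code[i].replace('_', str(quad))

# E.truelist / E.falselist for the condition a < b:
truelist = makelist(emit('if a<b goto _'))
falselist = makelist(emit('goto _'))
backpatch(truelist, 100)          # true exit now known
backpatch(falselist, 200)         # false exit now known
```

The key property is that emit returns the quad number immediately, so lists of incomplete jumps can be carried as synthesized attributes and patched later, which is what makes one-pass bottom-up generation possible.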

  21. Run-Time Environments
  • How do we allocate space for the generated target code and the data objects of our source programs?
  • The places of the data objects that can be determined at compile time will be allocated statically.
  • But the places for some of the data objects will be allocated at run-time.
  • The allocation and de-allocation of the data objects is managed by the run-time support package.
  • The run-time support package is loaded together with the generated target code.
  • The structure of the run-time support package depends on the semantics of the programming language (especially the semantics of procedures in that language).
  • Each execution of a procedure is called an activation of that procedure.

  22. Procedure Activations
  • An execution of a procedure starts at the beginning of the procedure body.
  • When the procedure is completed, it returns control to the point immediately after the place where that procedure was called.
  • Each execution of a procedure is called an activation of it.
  • The lifetime of an activation of a procedure is the sequence of steps between the first and the last step in the execution of that procedure (including the other procedures called by that procedure).
  • If a and b are procedure activations, then their lifetimes are either non-overlapping or nested.
  • If a procedure is recursive, a new activation can begin before an earlier activation of the same procedure has ended.

  23. Activation Tree (cont.)
  (Figure: an example activation tree with activations of main, p, s, s, and q.)

  24. Run-Time Storage Organization
  • Memory locations for code are determined at compile time.
  • Locations of static data can also be determined at compile time.
  • Data objects allocated at run-time (activation records).
  • Other dynamically allocated data objects at run-time (for example, the malloc area in C).

  25. Activation Records
  • Information needed by a single execution of a procedure is managed using a contiguous block of storage called an activation record.
  • An activation record is allocated when a procedure is entered, and it is de-allocated when that procedure is exited.
  • The size of each field can be determined at compile time (although the actual location of the activation record is determined at run-time).
  • The exception: if the procedure has a local variable whose size depends on a parameter, that size is determined at run-time.

  26. Activation Records (cont.)
  • The return value of the called procedure is returned in this field to the calling procedure. In practice, we may use a machine register for the return value.
  • The field for actual parameters is used by the calling procedure to supply parameters to the called procedure.
  • The optional control link points to the activation record of the caller.
  • The optional access link is used to refer to nonlocal data held in other activation records.
  • The field for saved machine status holds information about the state of the machine before the procedure is called.
  • The field for local data holds data that is local to an execution of the procedure.
  • Temporary values are stored in the field for temporaries.

  27. Access to Nonlocal Names
  • Scope rules of a language determine the treatment of references to nonlocal names.
  • Scope Rules:
  • Lexical Scope (Static Scope)
  • Determines the declaration that applies to a name by examining the program text alone, at compile-time.
  • The most-closely-nested rule is used.
  • Pascal, C, ...
  • Dynamic Scope
  • Determines the declaration that applies to a name at run-time.
  • Lisp, APL, ...

  28. Access Links
  program main;
  var a:int;
  procedure p;
    var d:int;
    begin a:=1; end;
  procedure q(i:int);
    var b:int;
    procedure s;
      var c:int;
      begin p; end;
    begin if (i<>0) then q(i-1) else s; end;
  begin q(1); end;

  29. Displays
  • An array of pointers to activation records can be used to access activation records. This array is called a display.
  • For each nesting level there is one array entry: the current activation record at level 1, at level 2, at level 3, and so on.

  30. Accessing Nonlocal Variables Using a Display
  program main;
  var a:int;
  procedure p;
    var b:int;
    begin q; end;
  procedure q();
    var c:int;
    begin c:=a+b; end;
  begin p; end;
  • Code for c := a + b using the display D:
  addrC := offsetC(D[3])
  addrB := offsetB(D[2])
  addrA := offsetA(D[1])
  ADD addrA,addrB,addrC

  31. Issues in the Design of a Code Generator
  • General tasks in almost all code generators: instruction selection, register allocation and assignment.
  • The details also depend on the specifics of the intermediate representation, the target language, and the run-time system.
  • The most important criterion for a code generator is that it produce correct code.
  • Given the premium on correctness, designing a code generator so it can be easily implemented, tested, and maintained is an important design goal.

  32. Instruction Selection
  • The nature of the instruction set of the target machine has a strong effect on the difficulty of instruction selection. For example:
  • The uniformity and completeness of the instruction set are important factors.
  • Instruction speeds and machine idioms are other important factors.
  • If we do not care about the efficiency of the target program, instruction selection is straightforward.
  x = y + z    LD R0, y
                ADD R0, R0, z
                ST x, R0
  a = b + c    LD R0, b
  d = a + e     ADD R0, R0, c
                ST a, R0
                LD R0, a      (redundant: a is already in R0)
                ADD R0, R0, e
                ST d, R0

  33. Register Allocation
  • A key problem in code generation is deciding what values to hold in what registers.
  • Efficient utilization is particularly important.
  • The use of registers is often subdivided into two subproblems:
  • Register allocation, during which we select the set of variables that will reside in registers at each point in the program.
  • Register assignment, during which we pick the specific register that a variable will reside in.
  • Finding an optimal assignment of registers to variables is difficult, even for a single-register machine.
  • Mathematically, the problem is NP-complete.

  34. A Simple Target Machine Model
  • Our target computer models a three-address machine with load and store operations, computation operations, jump operations, and conditional jumps.
  • The underlying computer is a byte-addressable machine with n general-purpose registers.
  • Assume the following kinds of instructions are available:
  • Load operations
  • Store operations
  • Computation operations
  • Unconditional jumps
  • Conditional jumps

  35. Basic Blocks and Flow Graphs
  • Introduce a graph representation of intermediate code that is helpful for discussing code generation.
  • Partition the intermediate code into basic blocks.
  • The basic blocks become the nodes of a flow graph, whose edges indicate which blocks can follow which other blocks.

  36. Optimization of Basic Blocks
  • Local optimization: within each basic block.
  • Global optimization: looks at how information flows among the basic blocks of a flow graph.
  • This chapter focuses on local optimization.

  37. DAG Representation of Basic Blocks
  • Construct a DAG for a basic block:
  1. There is a node in the DAG for each of the initial values of the variables appearing in the basic block.
  2. There is a node N associated with each statement s within the block. The children of N are those nodes corresponding to statements that are the last definitions, prior to s, of the operands used by s.
  3. Node N is labeled by the operator applied at s, and also attached to N is the list of variables for which it is the last definition within the block.
  4. Certain nodes are designated output nodes. These are the nodes whose variables are live on exit from the block; that is, their values may be used later, in another block of the flow graph.
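Rules 1 and 2 amount to hashing each (operator, operand-nodes) triple: if the triple already has a node, the statement is a local common subexpression. A value-numbering sketch (the statement format and tag names are illustrative):

```python
def build_dag(block):
    """Detect local common subexpressions in a basic block.

    block: list of statements (dst, op, arg1, arg2).
    Returns a list of (dst, tag, node) where tag is 'new' or 'copy'.
    """
    node_of = {}     # variable name -> node of its last definition (rule 2)
    nodes = {}       # (op, left, right) -> existing DAG node
    out = []

    def leaf(name):
        # Rule 1: a leaf node for the initial value of each operand.
        if name not in node_of:
            node_of[name] = ('leaf', name)
        return node_of[name]

    for dst, op, a, b in block:
        key = (op, leaf(a), leaf(b))
        if key in nodes:
            out.append((dst, 'copy', nodes[key]))  # common subexpression found
        else:
            nodes[key] = key
            out.append((dst, 'new', key))
        node_of[dst] = nodes[key]                  # dst's last definition
    return out
```

Because leaf() looks up the *last definition* of each operand, redefining an operand in between correctly blocks the reuse (a key subtlety of rule 2).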

  38. Finding Local Common Subexpressions
  • What if b and d are live on exit?

  39. Dead Code Elimination
  • Delete from a DAG any root (node with no ancestors) that has no live variables attached.
  • Repeated application of this transformation will remove all nodes from the DAG that correspond to dead code.
  • Example: assume a and b are live but c and e are not.
  • e, and then c, can be deleted.

  40. The Use of Algebraic Identities
  • Eliminate computations
  • Reduction in strength
  • Constant folding: 2*3.14 = 6.28, evaluated at compile time
  • Other algebraic transformations:
  • x*y = y*x
  • x>y is equivalent to x-y>0
  • a = b+c; e = c+d+b  can become  a = b+c; e = a+d

  41. Representation of Array References
  • x = a[i]
  • a[j] = y
  • An assignment through a[j] kills the node for x = a[i] (j might equal i).

  42. Reassembling Basic Blocks From DAGs
  (Figure: the reassembled code differs depending on whether b is live on exit or not.)

  43. Register and Address Descriptors
  • Descriptors are necessary for variable load and store decisions.
  • Register descriptor:
  • one for each available register
  • keeps track of the variable names whose current value is in that register
  • initially, all register descriptors are empty
  • Address descriptor:
  • one for each program variable
  • keeps track of the location(s) where the current value of that variable can be found
  • stored in the symbol-table entry for that variable name

  44. The Code-Generation Algorithm
  • Function getReg(I): selects registers for each memory location associated with the three-address instruction I.
  • Machine instructions for operations. For a three-address instruction such as x = y + z, do the following:
  1. Use getReg(x = y + z) to select registers for x, y, and z. Call these Rx, Ry, and Rz.
  2. If y is not in Ry (according to the register descriptor for Ry), then issue an instruction LD Ry, y', where y' is one of the memory locations for y (according to the address descriptor for y).
  3. Similarly, if z is not in Rz, issue an instruction LD Rz, z', where z' is a location for z.
  4. Issue the instruction ADD Rx, Ry, Rz.
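The four steps can be sketched directly. This is a simplified illustration: the getReg below is a deliberately naive placeholder (it always hands out R0, R1, R2 in order), whereas a real getReg consults liveness and the descriptors:

```python
registers = {'R0': set(), 'R1': set(), 'R2': set()}  # register descriptors
address = {}                                          # address descriptors

def get_reg(*names):
    # Naive stand-in for getReg: fixed registers in order (illustrative only).
    return ['R0', 'R1', 'R2'][:len(names)]

def gen_add(x, y, z):
    """Emit code for x = y + z following steps 1-4."""
    out = []
    rx, ry, rz = get_reg(x, y, z)                    # step 1
    if y not in registers[ry]:                       # step 2: load y if absent
        out.append('LD %s, %s' % (ry, y))
        registers[ry] = {y}
    if z not in registers[rz]:                       # step 3: load z if absent
        out.append('LD %s, %s' % (rz, z))
        registers[rz] = {z}
    out.append('ADD %s, %s, %s' % (rx, ry, rz))      # step 4
    registers[rx] = {x}                              # Rx now holds x only
    address[x] = {rx}                                # x's value lives in Rx
    return out
```

Even with the naive register choice, the descriptor checks in steps 2 and 3 already suppress redundant loads when an operand is still sitting in its register.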

  45. (Figure only on this slide.)

  46. Peephole Optimization
  • The peephole is a small, sliding window on a program.
  • Peephole optimization is done by examining a sliding window of target instructions and replacing instruction sequences within the peephole by a shorter or faster sequence, whenever possible.
  • Peephole optimization can also be applied directly after intermediate code generation to improve the intermediate representation.

  47. Eliminating Unreachable Code
  • An unlabeled instruction immediately following an unconditional jump may be removed.
  • This operation can be repeated to eliminate a sequence of instructions.
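This rule is a one-pass peephole scan. A sketch, representing instructions as (label, text) pairs where label is None for unlabeled instructions (this representation is illustrative):

```python
def remove_unreachable(instrs):
    """Drop unlabeled instructions that follow an unconditional jump.

    instrs: list of (label_or_None, text) pairs.
    """
    out, skipping = [], False
    for label, text in instrs:
        if label is not None:
            skipping = False          # a label makes code reachable again
        if not skipping:
            out.append((label, text))
        if text.startswith('goto'):
            skipping = True           # everything after is dead until a label
    return out
```

Repeated application is unnecessary here because the flag keeps skipping until the next label, which matches applying the single-instruction rule exhaustively.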

  48. Flow-of-Control Optimizations
  • Unnecessary jumps can be eliminated in either the intermediate code or the target code by peephole optimizations.
  • For example, a jump to a jump can be collapsed when there is only one jump to L1.

  49. Algebraic Simplification and Reduction in Strength
  • Algebraic identities can be used to eliminate three-address statements:
  • x = x+0; x = x*1
  • Reduction-in-strength transformations can be applied to replace expensive operations:
  • x^2 or power(x, 2) can be replaced by x*x
  • Fixed-point multiplication or division by a power of two can be replaced by a shift
  • Floating-point division by a constant can be approximated as multiplication by a constant
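These identities are simple pattern rewrites on individual statements. A sketch over the (dst, op, a, b) statement form used earlier (the rules below mirror the slide; the representation is illustrative):

```python
def simplify(stmt):
    """Apply one algebraic identity / strength reduction to a statement."""
    dst, op, a, b = stmt
    if op == '+' and b == 0:
        return (dst, '=', a, None)      # x = x + 0  becomes a copy
    if op == '*' and b == 1:
        return (dst, '=', a, None)      # x = x * 1  becomes a copy
    if op == '**' and b == 2:
        return (dst, '*', a, a)         # x^2 -> x * x (strength reduction)
    if op == '*' and isinstance(a, float) and isinstance(b, float):
        return (dst, '=', a * b, None)  # constant folding at compile time
    return stmt                         # no identity applies
```

Copy statements produced this way (dst, '=', a, None) are then candidates for copy propagation and dead-code elimination in later passes.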

  50. Use of Machine Idioms
  • The target machine may have hardware instructions to implement certain specific operations efficiently.
  • Using these instructions can reduce execution time significantly.
  • Example:
  • Some machines have auto-increment and auto-decrement addressing modes.
  • The use of these modes greatly improves the quality of code when pushing or popping a stack, as in parameter passing.
  • These modes can also be used in code for statements like x = x + 1.