
Error detection in LR parsing

  1. Error detection in LR parsing
     • Errors are discovered when a slot in the action table is blank.
     • Canonical LR(1) parsers detect and report the error as soon as it is encountered.
     • LALR(1) parsers may perform a few reductions after the error has been encountered, and only then detect it. This is a consequence of the state merging done during LALR(1) construction.
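To make the blank-slot idea concrete, here is a minimal sketch of a table-driven LR driver; it is not from the slides, and the table encoding, the use of 'i' for id, and all names are choices made for illustration. Blank action entries are modeled as missing map entries, so the error is reported the moment one is consulted.

    // Minimal sketch of a table-driven LR parser driver (illustrative only).
    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    enum class Kind { Shift, Reduce, Accept };
    struct Action { Kind kind; int arg; };                 // arg: target state or rule number

    std::map<std::pair<int, char>, Action> action;         // blank slots are simply absent
    std::map<std::pair<int, int>, int> go_to;              // goto table for non-terminals
    std::vector<std::pair<int, int>> rules;                // rule -> {lhs non-terminal, rhs length}

    bool parse(const std::string& input) {                 // input must end with '$'
        std::vector<int> stack = {0};                      // state stack
        size_t pos = 0;
        for (;;) {
            char lookahead = input[pos];
            auto it = action.find({stack.back(), lookahead});
            if (it == action.end()) {
                // Blank slot: a canonical LR(1) table reports the error right here,
                // before any further reductions are performed.
                std::printf("syntax error at position %zu (saw '%c')\n", pos, lookahead);
                return false;
            }
            if (it->second.kind == Kind::Accept) return true;
            if (it->second.kind == Kind::Shift) {
                stack.push_back(it->second.arg);           // push the new state
                ++pos;                                     // consume the token
            } else {                                       // reduce by rule 'arg'
                auto [lhs, len] = rules[it->second.arg];
                stack.resize(stack.size() - len);          // pop the handle
                stack.push_back(go_to[{stack.back(), lhs}]);
            }
        }
    }

    int main() {
        // The grammar E -> E + E | id from slide 3, with 'i' standing for id.
        const int E = 0;
        rules = {{0, 0}, {E, 3}, {E, 1}};                  // rule 1: E -> E+E, rule 2: E -> id
        action[{0, 'i'}] = {Kind::Shift, 2};  go_to[{0, E}] = 1;
        action[{1, '+'}] = {Kind::Shift, 3};  action[{1, '$'}] = {Kind::Accept, 0};
        action[{2, '+'}] = {Kind::Reduce, 2}; action[{2, '$'}] = {Kind::Reduce, 2};
        action[{3, 'i'}] = {Kind::Shift, 2};  go_to[{3, E}] = 4;
        action[{4, '+'}] = {Kind::Reduce, 1}; action[{4, '$'}] = {Kind::Reduce, 1};

        std::printf("%d\n", parse("i+i$"));                // 1: accepted
        std::printf("%d\n", parse("i+$"));                 // 0: error detected at the blank slot
    }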

  2. Error recovery in LR parsing
     • Phase-level recovery
       • Associate error routines with the empty table slots. Figure out what situation may have caused the error and make an appropriate recovery.
     • Panic-mode recovery
       • Discard symbols from the stack until a non-terminal is found. Discard input symbols until a possible lookahead for that non-terminal is found. Try to continue parsing.
     • Error productions
       • Common errors can be added as rules to the grammar.
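Continuing the driver sketch above, panic-mode recovery as described on this slide might look roughly like this; the go_to and follow tables and the single non-terminal E are assumptions, and computing the FOLLOW sets is taken as given.

    // A sketch of panic-mode recovery for a table-driven LR parser.
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    std::map<std::pair<int, int>, int> go_to;        // goto[state][non-terminal]
    std::map<int, std::set<char>> follow;            // FOLLOW set per non-terminal
    const int E = 0;                                  // the only non-terminal here

    // Pop states until one has a goto on E, skip input until a token in FOLLOW(E)
    // appears, push goto[state][E] as if an E had been parsed, then resume parsing.
    bool recover(std::vector<int>& stack, const std::string& input, size_t& pos) {
        while (!stack.empty() && !go_to.count({stack.back(), E}))
            stack.pop_back();                         // discard stack symbols
        if (stack.empty()) return false;              // nothing to recover to
        while (pos < input.size() && !follow[E].count(input[pos]))
            ++pos;                                    // discard input symbols
        if (pos == input.size()) return false;
        stack.push_back(go_to[{stack.back(), E}]);    // pretend an E was recognized
        return true;                                  // let the normal loop continue
    }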

  3. Error recovery in LR parsing
     • Phase-level recovery
       • Consider the table for the grammar E → E+E | id

           State     +            id     $            E
             0       e1           s2     e1           1
             1       s3           e2     accept
             2       r(E→id)      e3     r(E→id)
             3       e1           s2     e1           4
             4       r(E→E+E)     e2     r(E→E+E)

       • Error e1: "missing operand inserted". Recover by inserting an imaginary identifier in the stack and shifting to state 2.
       • Error e2: "missing operator inserted". Recover by inserting an imaginary operator in the stack and shifting to state 3.
       • Error e3: ??
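For phase-level recovery, the e1 and e2 slots of this table could be bound to routines along these lines; this is only a sketch, with the state numbers taken from the table and everything else assumed. Pushing the state that the imaginary symbol would have shifted into has the same effect as inserting the symbol itself.

    // Sketch of the e1/e2 error routines wired into the driver above.
    #include <cstdio>
    #include <vector>

    void e1(std::vector<int>& stack) {
        // States 0 and 3 expect an operand: report, then pretend an id was shifted.
        std::fputs("error: missing operand inserted\n", stderr);
        stack.push_back(2);                           // as if we shifted id into state 2
    }

    void e2(std::vector<int>& stack) {
        // States 1 and 4 expect an operator: report, then pretend a '+' was shifted.
        std::fputs("error: missing operator inserted\n", stderr);
        stack.push_back(3);                           // as if we shifted '+' into state 3
    }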

  4. Error recovery in LR parsing
     • Error productions
       • Allow the parser to "recognize" erroneous input.
       • Example:

           statement : expression SEMI
                     | error SEMI
                     | error NEWLINE

       • If the expression cannot be matched, discard everything up to the next semicolon or the next newline.

  5. LR(1) grammars
     • Does right-recursion cause a problem in bottom-up parsing?
       • No, because a bottom-up parser defers reductions until it has read the whole handle.
     • Are these grammars LR(1)? How about LL(1)?

         Grammar 1:  S → Aa | Bb     A → c         B → c
                     LR(1): YES   LL(1): NO   LL(2): YES

         Grammar 2:  S → Aca | Bcb   A → c         B → c
                     LR(1): NO    LL(1): NO   LL(2): NO   LR(2): YES

         Grammar 3:  S → Aa | Bb     A → cA | a    B → cB | b
                     LR(1): YES   LL(k): NO

  6. Where are we?
     • Ultimate goal: generate machine code.
     • Before we generate code, we must collect information about the program:
       • Front end
         • scanning (recognizing words)            CHECK
         • parsing (recognizing syntax)            CHECK
         • semantic analysis (recognizing meaning)
     • There are issues deeper than structure. Consider:

         int func (int x, int y);
         int main () {
             int list[5], i, j;
             char *str;
             j = 10 + 'b';
             str = 8;
             m = func("aa", j, list[12]);
             return 0;
         }

     • This code is syntactically correct, but will not work. What problems are there?
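One way to answer the slide's question is to annotate the program; the comments below are editorial, not part of the slide.

    int func (int x, int y);          /* declared to take two ints                  */

    int main () {
        int list[5], i, j;
        char *str;
        j = 10 + 'b';                 /* adds a character to an int: legal via
                                         promotion, but probably not intended       */
        str = 8;                      /* assigns an int to a char*: type error      */
        m = func("aa", j, list[12]);  /* m is never declared; func gets three
                                         arguments (the first a string) instead of
                                         two ints; list[12] is out of bounds        */
        return 0;
    }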

  7. Beyond syntax analysis
     • An identifier named x has been recognized.
       • Is x a scalar, array, or function?
       • How big is x?
       • If x is a function, how many and what type of arguments does it take?
       • Is x declared before being used?
       • Where can x be stored?
       • Is the expression x+y type-consistent?
     • Semantic analysis is the phase where we collect information about the types of expressions and check for type-related errors.
     • The more information we can collect at compile time, the less overhead we have at run time.
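The questions above are exactly the facts a semantic analyzer records per identifier. A sketch of such a symbol-table record follows; the field names are illustrative and not taken from the slides.

    // Sketch of a symbol-table entry holding what the analyzer learns about x.
    #include <string>
    #include <vector>

    enum class SymKind { Scalar, Array, Function };

    struct Type;                                  // detailed type representation omitted

    struct SymbolInfo {
        std::string name;                         // the identifier, e.g. "x"
        SymKind kind;                             // scalar, array, or function?
        int size_bytes;                           // how big is x?
        const Type* type;                         // scalar/element/return type
        std::vector<const Type*> params;          // argument types, if a function
        int decl_line;                            // declared before being used?
        int frame_offset;                         // where can x be stored?
    };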

  8. Semantic analysis
     • Collecting type information may involve "computations".
       • What is the type of x+y given the types of x and y?
     • Tool: attribute grammars
       • A CFG.
       • Each grammar symbol has associated attributes.
       • The grammar is augmented by rules (semantic actions) that specify how the values of attributes are computed from other attributes.
     • The process of using semantic actions to evaluate attributes is called syntax-directed translation.
     • Examples:
       • Grammar of declarations.
       • Grammar of signed binary numbers.

  9. Attribute grammars

     Example 1: Grammar of declarations

         Production        Semantic rule
         D → T L           L.in = T.type
         T → int           T.type = integer
         T → char          T.type = character
         L → L1 , id       L1.in = L.in ; addtype(id.index, L.in)
         L → id            addtype(id.index, L.in)
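A small sketch of how Example 1's rules might be carried out on flat declarations such as "int a, b, c": T.type is computed once and then handed down the identifier list as the inherited attribute L.in, with addtype recording each name. The map-based symbol table and the string handling are assumptions made for illustration.

    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    enum class Type { Integer, Character };
    std::map<std::string, Type> symtab;

    // addtype(id, L.in): record the identifier's type in the symbol table.
    void addtype(const std::string& id, Type t) { symtab[id] = t; }

    // D -> T L: compute T.type (synthesized), then pass it down the identifier
    // list as the inherited attribute L.in.
    void declaration(const std::string& text) {
        std::istringstream in(text);
        std::string keyword;
        in >> keyword;                                // "int" or "char"
        Type t = (keyword == "int") ? Type::Integer : Type::Character;   // T.type
        std::string id;
        while (std::getline(in, id, ',')) {           // L -> L , id | id
            size_t b = id.find_first_not_of(' ');     // trim surrounding blanks
            size_t e = id.find_last_not_of(' ');
            addtype(id.substr(b, e - b + 1), t);      // addtype(id.index, L.in)
        }
    }

    int main() {
        declaration("int a, b, c");
        declaration("char s");
        for (auto& [name, t] : symtab)
            std::cout << name << " : "
                      << (t == Type::Integer ? "integer" : "character") << "\n";
    }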

  10. Attribute grammars

     Example 2: Grammar of signed binary numbers

         Production        Semantic rule
         N → S L           if (S.neg) print('-'); else print('+'); print(L.val);
         S → +             S.neg = 0
         S → –             S.neg = 1
         L → L1 B          L.val = 2*L1.val + B.val
         L → B             L.val = B.val
         B → 0             B.val = 0 * 2^0
         B → 1             B.val = 1 * 2^0
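Example 2 uses only synthesized attributes, so the value can be computed in a single left-to-right pass. A sketch, assuming the numeral arrives as a plain string:

    #include <iostream>
    #include <string>

    // Evaluate a signed binary numeral such as "-1011" by mimicking the rules:
    // S gives the sign, and each new bit B updates L.val = 2*L1.val + B.val.
    void print_number(const std::string& text) {
        bool neg = (text[0] == '-');                  // S.neg = 1 for '-', 0 for '+'
        int val = 0;
        for (size_t i = 1; i < text.size(); ++i)      // L -> L B, applied left to right
            val = 2 * val + (text[i] - '0');          // L.val = 2*L1.val + B.val
        std::cout << (neg ? '-' : '+') << val << "\n";    // N -> S L: sign, then L.val
    }

    int main() {
        print_number("+101");                         // prints +5
        print_number("-1011");                        // prints -11
    }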

  11. Attributes
     • Attributed parse tree = parse tree annotated with attribute rules.
     • Each rule implicitly defines a set of dependences.
       • Each attribute's value depends on the values of other attributes.
       • These dependences form an attribute-dependence graph.
     • Note:
       • Some dependences flow upward.
         • The attributes of a node depend on those of its children.
         • We call those synthesized attributes.
       • Some dependences flow downward.
         • The attributes of a node depend on those of its parent or siblings.
         • We call those inherited attributes.
     • How do we handle non-local information?
       • Use copy rules to "transfer" information to other parts of the tree.

  12. Attribute grammars

         Production        Semantic rule
         E → E1 + E2       E.val = E1.val + E2.val
         E → num           E.val = num.yylval
         E → ( E1 )        E.val = E1.val

     [Figure: attribute-dependence graph over the annotated parse tree of an expression that adds num 2 to the parenthesized sum of num 7 and num 3; the E.val attributes shown are 7, 3, 10, 2, and 12 at the root.]
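In a bottom-up parser, synthesized attributes such as E.val are typically kept on a value stack that runs parallel to the state stack (this is what yacc's $$ = $1 + $3 does). The sketch below shows only the reduction step under that assumption; the shift side simply pushes a value for every terminal (yylval for num, a placeholder for the others), and the rule numbering is assumed.

    #include <vector>

    std::vector<int> vals;                            // value stack, parallel to the state stack

    // Called on every shift: num pushes its yylval, other terminals push a placeholder.
    void shift_value(int v) { vals.push_back(v); }

    // Called on every reduction; assumed numbering:
    //   1: E -> E1 + E2     2: E -> num     3: E -> ( E1 )
    void reduce_value(int rule) {
        if (rule == 1) {                              // E.val = E1.val + E2.val
            int e2 = vals[vals.size() - 1];
            int e1 = vals[vals.size() - 3];           // skip the '+' placeholder in between
            vals.resize(vals.size() - 3);
            vals.push_back(e1 + e2);
        } else if (rule == 2) {
            // E.val = num.yylval: the value shifted for num is already on top, so keep it.
        } else {                                      // E.val = E1.val
            int e1 = vals[vals.size() - 2];           // the value between '(' and ')'
            vals.resize(vals.size() - 3);
            vals.push_back(e1);
        }
    }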

  13. Attribute grammars
     • We can use an attribute grammar to construct an AST.
       • The attribute for each non-terminal is a node of the tree.
     • Example:

         Production        Semantic rule
         E → E1 + E2       E.node = new PlusNode(E1.node, E2.node)
         E → num           E.node = num.yylval
         E → ( E1 )        E.node = E1.node

     • Notes:
       • yylval is assumed to be a node (leaf) created during scanning.
       • The production E → (E1) does not create a new node, as one is not needed.
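A sketch of the node classes such a grammar could build: PlusNode comes from the slide, while NumNode, the eval method, and the use of unique_ptr are assumptions added for illustration.

    #include <iostream>
    #include <memory>

    struct Node {
        virtual ~Node() = default;
        virtual int eval() const = 0;                 // one possible use of the finished AST
    };

    struct NumNode : Node {                           // the leaf the scanner creates (yylval)
        int value;
        explicit NumNode(int v) : value(v) {}
        int eval() const override { return value; }
    };

    struct PlusNode : Node {                          // built by E -> E1 + E2
        std::unique_ptr<Node> lhs, rhs;
        PlusNode(std::unique_ptr<Node> l, std::unique_ptr<Node> r)
            : lhs(std::move(l)), rhs(std::move(r)) {}
        int eval() const override { return lhs->eval() + rhs->eval(); }
    };

    int main() {
        // Hand-built AST for 2 + (7 + 3); a parser's semantic actions would build it
        // while reducing: E.node = new PlusNode(E1.node, E2.node). The E -> (E1)
        // production simply passes E1.node through without creating a node.
        auto ast = std::make_unique<PlusNode>(
            std::make_unique<NumNode>(2),
            std::make_unique<PlusNode>(std::make_unique<NumNode>(7),
                                       std::make_unique<NumNode>(3)));
        std::cout << ast->eval() << "\n";             // prints 12
    }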
