
Extreme Grammaring


Presentation Transcript


    1. Extreme Grammaring: Development of an industrial-strength ISO VDM-SL Grammar. This presentation is called Extreme Grammaring and covers the development of an industrial-strength grammar for ISO VDM-SL.

    2. Introduction. The need to build a parser for VDM from a VDM-SL grammar (the VooDooM project). Although parsing is a subset of a well-studied area like compilers, grammars have always been looked upon as the "ugly duckling". This project started from the need for a VDM-SL parser, built from a grammar, for the VooDooM project. As with any other piece of software, there are good and bad ways of developing it. But are there any techniques for developing grammars? Grammars are to parser generators what programming languages are to compilers, yet although parsing is a subset of a well-studied area like compilers, grammars have always been looked upon as the "ugly duckling". So the answer to my initial question is no... What can we do, then? In this presentation I will present a possible solution to this problem, which I call "Extreme Grammaring": a methodology for developing grammars inspired by the Extreme Programming philosophy. This methodology was used to develop a grammar for ISO VDM-SL, one of the most common languages used for formal specification, using SDF (the Syntax Definition Formalism).

    3. Background. VDM: the Vienna Development Method is one of the most mature formal methods, primarily intended for the formal specification and development of functional aspects of software systems. The importance of a VDM-SL grammar: documentation; building a parser (metric generators, language generators, ...). Before describing the methodology, some background must be presented. What is VDM? VDM is one of the most mature formal methods. It stands for Vienna Development Method and is supported by VDM-SL (Vienna Development Method - Specification Language). The method is primarily intended for the formal specification and development of functional aspects of software systems. Why is a VDM-SL grammar important? Documentation: if you have a grammar of the language, you have a great deal of documentation for it, and the grammar can be used as a guide for developing specifications. Building a parser: parsers form the basis of all software engineering tools (metric generators, language generators, etc.). In the case of the VooDooM project, this parser will be used to allow program transformation and generation from VDM-SL specifications.

    4. Starting point. Previous work: a VDM-SL grammar in Happy + Alex. Some problems (state of the art: hacking vs. engineering): the grammar was encoded directly; difficult to maintain/change (300 rules); lack of tool support; ... Where did I start from? I had some previous work done in the previous version of VooDooM: a parser built in Happy + Alex (the Haskell versions of yacc + lex). That parser was built from a yacc grammar found on the internet. Although this approach worked, it had some problems, and these problems reflect the state of the art of grammar development (hacking vs. engineering). Our grammar was encoded directly in a specific tool notation (the Happy language), which has the inconvenience that the same grammar cannot easily be used with other technology. It started to be difficult to maintain and change: it had about 300 rules, and each time something changed, lots of conflicts could arise. Moreover, a change in one part of the grammar could cause conflicts in some other part, which makes it harder to find the conflicting productions and to fix them. Lack of tool support: the AST and the pretty-printer were manually derived from the Happy grammar, so each time the grammar changed, both had to be manually modified. Furthermore, this parser can only be used from Haskell, so no other tools can build on it. Finally, it is very difficult to verify the accuracy of the grammar; the author of the yacc grammar says there are some unsolved problems, but does not specify which.

    5. Principles of Grammar Engineering. Introduced by Lämmel in "Towards an engineering discipline for grammarware": start from specifications (base-line grammar); implement by customization (technology, implementation); separate concerns (modularization); enable evolution (minimize the impact of changes); ensure quality (metrics, correctness); automate (traceability and scalability). In the visionary paper on grammar engineering called "Towards an engineering discipline for grammarware", Ralf Lämmel introduces for the first time what the principles of grammar engineering should be. Start from specifications: avoid committing too early to a specific use case, specific technology, and implementation choices; begin from a base-line grammar. Implement by customization: grammarware should be derived from the base-line grammar by customization steps that gradually commit to details of use case, technology, and implementation. Separate concerns: advanced modularization. Enable evolution: the evolution of grammatical structure must be effectively supported by transformation techniques. Ensure quality: we need quality notions and metrics in the first place, and we need to identify grammar styles and notions of correctness and completeness. Automate: achieve traceability and scalability. For a true grammar engineering discipline, in my opinion, these principles by themselves are not enough: a methodology for developing grammars must be defined, and techniques and tools must be implemented to support it. This methodology, as said before, was inspired by Extreme Programming.

    6. Extreme Programming. The Planning Game (scope, priorities, technical estimates); Small Releases (very short release cycle); Metaphor (shared story); Simple Design (remove complexity when found); Testing (continuous unit testing, test-first design); Refactoring (restructure without functionality changes); Pair Programming (two programmers, one machine); Collective Ownership (change each other's code, anytime); Continuous Integration (build and test); 40-Hour Week (working more = producing less); On-Site Customer (user in the team); Coding Standards (no irrelevant personal preferences). So what is Extreme Programming? Extreme Programming is a methodology for software development that tries to apply all the good software engineering techniques. It defines twelve "golden rules". The Planning Game: quickly determine the scope of the next release by combining business priorities and technical estimates; as reality overtakes the plan, update the plan. Small Releases: put a simple system into production quickly, then release new versions on a very short cycle. Metaphor: guide all development with a simple shared story of how the whole system works. Simple Design: the system should be designed as simply as possible at any given moment; extra complexity is removed as soon as it is discovered. Testing: programmers continually write unit tests, which must run flawlessly for development to continue; customers write tests demonstrating that features are finished. Refactoring: programmers restructure the system without changing its behavior to remove duplication, improve communication, simplify, or add flexibility. Pair Programming: all production code is written with two programmers at one machine. Collective Ownership: anyone can change any code anywhere in the system at any time. Continuous Integration: integrate and build the system many times a day, every time a task is completed. 40-Hour Week: work no more than 40 hours a week as a rule; never work overtime a second week in a row. On-Site Customer: include a real, live user on the team, available full-time to answer questions. Coding Standards: programmers write all code in accordance with rules emphasizing communication through the code.

    7. Extreme Grammaring. The same twelve rules, revisited: The Planning Game, Small Releases, Metaphor, Simple Design, Testing, Refactoring, Pair Programming, Collective Ownership, Continuous Integration, 40-Hour Week, On-Site Customer, Coding Standards. Now, of all these rules, which can we apply to Extreme Grammaring? We definitely want the ones shown in green on the slide; the red one does not make any sense here, and the yellow ones may or may not apply. So let's review each of them under the Extreme Grammaring philosophy. The Planning Game: the scope of the grammar is very important; we can define the scope as the base of the language or its most important extensions, the priorities as which part of the language is most important to disambiguate, and technical estimates can set time units for each task. Small Releases: very important; in this project I defined three releases, as we will see next. Metaphor: a shared story for grammar development could be "develop a grammar for some language to be used in some sort of tool". This probably makes sense when developing large software systems, but when developing grammars the shared story will always be the same. Simple Design: what is simple design in grammar development? Removing injections can be considered removing complexity from a grammar, and some ways of expressing rules are simpler than others, but this topic is left for future work. Testing: testing is the most important part of the whole methodology, and the part I will focus on most. Refactoring: restructuring grammars without making semantic changes. Pair Programming: I did this work by myself, but it could definitely be done with two people at one machine. Collective Ownership: this only applies in the case of pair programming, so the notion of collective ownership will not be mentioned here. Continuous Integration: integration is testing, in the sense that we test against the rest of the work done. 40-Hour Week: why not apply this too? It is proven to be true! :-) On-Site Customer: in this case I don't have an "on-site customer", but then this work is not for end-users; it is meant to be used as a component. Coding Standards: there is no existing work on grammar coding standards, but some will be introduced here. So, as we see, not all twelve golden rules of Extreme Programming can be easily applied. I will focus on the most important or innovative ones: "The Planning Game", "Small Releases", "Testing", "Refactoring", "Continuous Integration" and "Coding Standards".

    8. The Planning Game. Scope: follow strictly the ISO VDM-SL grammar specification. Priorities: disambiguate types; disambiguate the full grammar; tree construction. Technical estimates: not defined... In the Planning Game, one of the first things to define is the scope of the work. In the development of the VDM-SL grammar, the scope was to follow the ISO specification strictly; no extensions are covered here (such as module extensions, or the IFAD extensions to VDM-SL and VDM++). The priorities were defined as follows: first, disambiguate the data types, because this is the part the VooDooM project cares about most; second, disambiguate the full grammar, making an effort to parse all the available source code; finally, tree construction, adding constructors for building an abstract syntax tree from the parse tree. Lastly, the technical estimates: in this case, this step was skipped, simply because I had no previous experience that would allow me to estimate how much time each task would need.

    9. Small Releases. Programmed releases (completed): 0.0.1, grammar typed from the standard; 0.0.2, disambiguated grammar; 0.0.3, AST construction. Future releases: 1.0, Haskell front-end (finished 2004-12-29); 2.0, Java front-end. For this work, three short-term releases were programmed. 0.0.1: an SDF grammar typed in from the standard ISO specification; because we use generalized LR parsing, it is possible to check for errors in the grammar, as we will see next. 0.0.2: the grammar completely disambiguated; with this grammar it should be possible to parse all the available source code without any ambiguity. 0.0.3: the grammar with constructs for building the AST. Two medium-term releases were also programmed. 1.0: the Haskell front-end; at the time of the presentation this version was 80% complete, and it was finished on December 29. 2.0: the Java front-end; this is planned, but there is no official release date (it will only happen if there is enough interest).

    10. Testing. White box: structural testing; full visibility into how the system works. Black box: functional or behavioral testing; only the interface with the exterior is available. Testing, as said before, is the central part of this talk. So what is testing? Testing can be divided into two main groups: white box and black box. White-box testing, also called structural or unit testing, has full visibility into how the system works and tests specific parts of the system. Black-box testing, also called functional or behavioral testing, only has access to the interface with the exterior.

    11. Grammar Unit Testing. Unit test: tests a single method, function, etc. Different types of unit tests: parsing (succeeds, fails); well-formedness of the tree. Test suite: the combination of all unit tests. Unit testing was first applied to Smalltalk, the first object-oriented programming language, so tests were for a single method. As these methodologies were applied to other paradigms, tests were applied to different parts of programs, such as functions. In the case of grammar engineering, unit tests are applied to small parts of the language. So, what types of unit tests exist? Parsing: we can test whether a small part of the grammar parses or does not parse (succeeds and fails, respectively). Well-formedness of the tree: whether the parse tree gets exactly the shape we intend, for instance whether some rules group with left or right associativity, or whether some rules have higher precedence than others. (SHOW A UNIT TEST EXAMPLE.) As you can see, I defined almost 30 unit tests for the expression production alone. The set of all tests is called the test suite; running the test suite lets us check that all the unit tests pass. (SHOW TEST SUITE RUNNING.)
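
    Since the transcript only points at a live demo here, the following is a minimal sketch of what such unit tests could look like in the testsuite notation of parse-unit (the SDF unit-testing tool named later in this talk); the sort, term constructors and inputs are invented for illustration, not taken from the actual VDM-SL test suite:

        testsuite Expressions
        topsort Expression

        test plus parses
          "a + b" succeeds

        test plus groups to the left
          "a + b + c" -> Plus(Plus(Var("a"), Var("b")), Var("c"))

        test dangling operator is rejected
          "a +" fails

    The first and last tests are pure parsing tests (succeeds/fails); the middle one also checks the well-formedness of the tree by comparing against an expected term.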

    12. Test Coverage. Rule coverage: introduced by Purdom (1972); explores all rules of a grammar; a simple measure, but it doesn't cover all cases. Context-dependent rule coverage: introduced by Lämmel in "Grammar Testing"; a generalization of the above in which the context is taken into account; no known implementations. Rule coverage was first introduced by Purdom. The rule coverage of a test set is simply which rules of the grammar it exercises. This is a very simple measure, but it doesn't cover all cases. Example (in SDF notation, where the defined non-terminal is on the right of the arrow):

        T S  -> Top
        A    -> T
        A    -> S
        "a"  -> A
        "b"  -> A

    The coverage of the input "a b" is 100%, since every rule is used, yet it does not cover all possible cases: for instance, "b" never appears in the T position. To cover all possible cases of a grammar, one needs context-dependent rule coverage. This concept was first introduced by Ralf Lämmel in his paper "Grammar Testing"; it is a generalization of the above definition in which the context is taken into account. There are no known implementations...

    13. Test Coverage Metrics. KP - Kernel Productions; KPr - Kernel Priorities; S - States; RSa - Rule Size average; RSm - Rule Size maximum; RC - Rule Coverage percentage. For computing test coverage, a tool called SdfCoverage was built by Joost Visser. Running it on the three releases with the available source code gave the following values (the source code used changed between version numbers). Kernel Productions: the number of non-reject productions in the normalized SDF grammar. Kernel Priorities: the number of priorities in the normalized SDF grammar (when the grammar is normalized, priorities are added, for instance in the A+ case). States: the number of states in the parse table. RSa: the average size of the left-hand side of productions. RSm: the maximum size of the left-hand side of productions. Rule Coverage: the percentage of kernel productions used by the test suite. The really interesting metrics are KP, KPr and RC. We can see that the number of Kernel Productions (KP) decreases with each release. In 0.0.2 this happens because some injections were removed (non-terminals were replaced by their definitions) in order to apply the disambiguation filters; in 0.0.3 it happens because the grammar was refactored (more injections were removed) to create a nicer AST. KPr is lower in 0.0.1, increases dramatically in 0.0.2, and then stabilizes. The number of KPr in 0.0.1 was not expected, because no disambiguation filters existed in that version; it seems that sdf2table adds "some" priorities during the normalization step. In 0.0.2 the whole grammar was disambiguated, so the value makes sense, and in 0.0.3 no disambiguation rules were added. The rule coverage values are very interesting. In 0.0.1, 48% of the grammar was not covered (almost half), which means the possibility that ambiguities still exist is very high. We can also observe a small increase in coverage, because examples were added during the disambiguation process. What is not very clear is why coverage increases from 0.0.2 to 0.0.3 when the tests don't change; this can be seen more clearly next.

    14. Test Coverage Metrics (2). Although the "generics" test suite does not change, its coverage gets lower (injections, total number of rules). The "expressions" and "functiontypes" suites were only added in version 0.0.2. This table shows the coverage values per test suite. In version 0.0.1 only the generics test suite existed; it was built from real-world examples downloaded from the internet, and it is fixed and did not change. The expressions and functiontypes test suites were only added in version 0.0.2 and also did not change in 0.0.3. So how can the generics test suite stay the same while its coverage gets lower? The explanation is simple: injections and the total number of rules. As we saw before, each version has fewer rules; as the injections were removed, fewer rules were covered, which explains the decrease in rule coverage.

    15. Refactoring. Semantics-preserving transformations; studied by Lämmel in "Grammar Adaptation". Operators: preserve - replace a phrase by an equivalent one; fold - replace a phrase by a non-terminal defined as that phrase; unfold - replace a non-terminal by its definition; introduce - introduction of non-terminals; eliminate - elimination of non-terminals; rename - renaming of non-terminals. Now that we have talked about testing, we move to the next topic on our Extreme Grammaring list: refactoring. Refactoring is transformation while preserving semantics. This concept was first applied to grammars by Ralf Lämmel in his paper "Grammar Adaptation", in which he defines six refactoring operators: preserve, for replacing a phrase by an equivalent one; fold, which replaces a phrase by a non-terminal whose definition is that phrase; unfold, which replaces a non-terminal by its definition; introduce, the introduction of non-terminals; eliminate, the elimination of non-terminals; and rename, the renaming of non-terminals.
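
    As an illustration, a hypothetical before/after sketch of fold in SDF (the non-terminal TypeList and these productions are invented for the example, not taken from the VDM-SL grammar). Before:

        context-free syntax
          "types" {TypeDefinition ";"}+ -> TypeDefinitions

    After folding the repeated phrase into a new non-terminal:

        context-free syntax
          "types" TypeList              -> TypeDefinitions
          {TypeDefinition ";"}+         -> TypeList

    Unfold is the inverse step. Since the language generated is unchanged, the existing test suite should still pass, which is how refactorings are validated in this methodology.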

    16. Continuous Integration. The integration test suite is a set of generic real-world examples; only 52% coverage; examples are difficult to find; most of the examples use language extensions. Examples: found on the internet; a pre-processor was used to extract code from literate programs. After each change, an integration test was run. This integration test was built from a set of real-world examples. After measuring coverage, we found that this integration test only covered 52% of the grammar. This is fairly small, but it is due to the difficulty of finding examples: most examples used IFAD language extensions and had to be changed manually to comply with the ISO standard before they could be used. These examples were found on the internet, but embedded in LaTeX code (literate VDM), so a little pre-processor was developed to extract the VDM code from the literate VDM.

    17. Coding Standards. Nothing found on the subject. The following can be applied: limiting the number of children in a rule; limiting the number of alternatives in a rule; preferring some sorts of constructs over others; a convention for non-terminal names; a convention for syntax specification; limiting module size; ... It is somewhat difficult to talk about coding standards for grammars: I was not able to find any information on the web about them. But it is not very hard to imagine a few, and the following can be applied. Limiting the number of children (non-terminals) in a rule. Limiting the number of alternatives in a rule (for instance, a production should not have more than 5 alternative rules). Preferring some sorts of constructs over others (SHOW EXAMPLE: the reject, prefer and avoid constructs). A convention for non-terminal names (SHOW EXAMPLE: in our grammar, non-terminals use the CamelCase convention, for instance CaseExpression). A convention for syntax specification (SHOW EXAMPLE: in some cases syntax was specified with character classes and regular expressions, and in other cases it was expressed using rules). Limiting module size (for instance, a module should not have more than 50 lines). Etc.
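
    For instance, the reject construct mentioned above is the usual SDF idiom for reserving keywords; a small sketch with invented lexical rules, where the non-terminal name also follows the CamelCase convention discussed above:

        lexical syntax
          [a-zA-Z] [a-zA-Z0-9]* -> Identifier
          "if"                  -> Identifier {reject}

    The {reject} production removes the keyword "if" from the Identifier sort, so "if" can never be parsed as a plain identifier.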

    18. Technological Alternatives. Most parser technologies are too restrictive: Lex + Yacc uses LALR(1); ANTLR uses LL(k). They have other problems as well: lexical, context-free and abstract syntax are kept separate; grammars are difficult to disambiguate (the left-recursion "demon"); grammars are technology-dependent. Solution: generalized LR parsing using SDF grammars. Most parser technologies are too restrictive; most people are probably aware of how hard it is to develop a grammar using lex + yacc (or Alex and Happy in the case of Haskell), or ANTLR. These technologies also have other problems. Lexical, context-free and abstract syntax are treated as completely different things, which makes it hard to specify any kind of grammar. Disambiguation is usually solved with some dark magic that reshapes the tree, and these tools cannot deal with the left-recursion demon. Grammars become very dependent on the technology used: writing a grammar as it is specified (normally in some (E)BNF formalism) just doesn't work; the grammar must be massaged a lot before it parses anything, and the result of this massaging is an unreadable and harder-to-maintain grammar.

    19. Supporting the Methodology. SDF (Syntax Definition Formalism): purely declarative; very expressive, with natural and concise syntax; modular structure. Supported by scannerless generalized LR parsing: supports compositional grammars; allows parsing with ambiguities (which allows earlier testing); disambiguation is separated from the grammar using priority and associativity rules. All this methodology is very nice, but how do we support it? For that I use SDF, which stands for Syntax Definition Formalism. It is purely declarative, because it only allows you to specify grammar rules (and nothing else, unlike yacc's semantic actions of procedural code). It is very expressive, with natural and concise syntax, very close to BNF (except for the direction of the arrows), which is the "standard" notation for defining a grammar. It allows a modular structure, to separate concerns (as required by the principles of grammar engineering). It is supported by scannerless generalized LR parsing, which supports compositional grammars (the composition of two generalized LR grammars is again a generalized LR grammar) and allows parsing with ambiguities, enabling earlier testing: for instance, I started testing the grammar right after typing it in from the ISO standard, which allowed me to catch typing errors and others, as we will see next. Disambiguation is separated from the grammar using priority and associativity rules, which gives better readability and better modularity (once more separating concerns).
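
    A minimal sketch of what this separation looks like in SDF, using an invented expression fragment rather than the actual VDM-SL rules: the productions stay declarative, and the associativity and priority filters live in their own sections:

        context-free syntax
          Expression "+" Expression -> Expression {left}
          Expression "*" Expression -> Expression {left}

        context-free priorities
          Expression "*" Expression -> Expression >
          Expression "+" Expression -> Expression

    The {left} attributes make both operators group to the left, and the priorities section makes "*" bind tighter than "+", all without touching the shape of the productions themselves.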

    20. SDF - Technology. Parsing: sdf2table, sglr. Testing: parse-unit, ambtracker, SdfCoverage. Tree visualization: tree2graph, graph2dot. Transformation: trm2baf, implodePT. Haskell generation: Sdf2Haskell (AST, pretty-printer). SDF is also supported by a large number of tools. Parsing is supported by sdf2table, which creates a parse table from a grammar, and sglr, which uses that parse table for parsing. Testing is supported by parse-unit, the framework for automating the execution of tests; ambtracker, which allows visualizing the ambiguities between rules; and SdfCoverage, which measures the test coverage of a grammar. Tree visualization is supported by tree2graph and graph2dot, which convert a parse tree from its internal format to the dot format so that it can be visualized with the Graphviz framework. Transformation is supported by trm2baf and implodePT. Finally, Haskell generation: Sdf2Haskell creates a Haskell datatype representation of the grammar's abstract syntax and automatically generates a pretty-printer for it.
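
    Assuming the usual invocations of these tools (the flags below are quoted from memory, so treat them as an approximation rather than authoritative), the basic parse pipeline looks something like this:

        # generate a parse table from the grammar (Main being the top module)
        sdf2table -i VDM.def -m Main -o VDM.tbl
        # parse a specification with the scannerless generalized LR parser
        sglr -p VDM.tbl -i spec.vdm -o spec.pt

    The resulting parse tree (spec.pt) is what the testing, visualization and transformation tools above consume.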

    21. Syntax Definition Formalism. Optional: "?". Repetition: "*", "+"; simple, e.g. Identifier+; with separators, e.g. { Identifier "," }+. Alternative: "|". NOTE: add something more about the notation... Perhaps define the parts an SDF grammar has: context-free rules, syntax, ...

    22. SDF - Example
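
    The example shown on this slide is an image that did not survive the transcript; as a stand-in, here is a small sketch of an SDF module exercising the notation from the previous slide (module and sort names invented for illustration):

        module IdentifierLists
        exports
          sorts Identifier IdentifierList
          lexical syntax
            [a-z]+    -> Identifier
            [\ \t\n]  -> LAYOUT
          context-free syntax
            "[" {Identifier ","}* "]" -> IdentifierList

    Note the reversed arrows relative to BNF (the defined sort is on the right) and the {Identifier ","}* notation for a possibly empty, comma-separated list.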

    23. Setting up the bases. A hard copy of the ISO VDM-SL standard (ISO/IEC 13817-1). An initial test suite: real-world examples (1970 LOC); exercises from a Formal Methods course (507 LOC). Software: CVS to keep track of all changes; parse-unit (the SDF unit-testing tool); the SDF2 software bundle (sdf2table, sglr); SdfCoverage; standard Unix tools (text editor, make, ...). Setting up the bases: what did I need to start this project? A hard copy of the ISO VDM-SL standard. An initial test suite: real-world examples found on the internet, with about 2,000 lines of code, and exercises from the Formal Methods course of the University of Minho, with 507 lines of code. Software: CVS, to keep track of all changes to the grammar and the project (a new commit was made for each change; the project went through almost 50 versions of the grammar); parse-unit, for testing the SDF grammar; the SDF2 software bundle, with all the tools that support SDF; the SdfCoverage tool, for measuring coverage; and the standard Unix tools, such as a text editor, make, etc.

    24. Development cycle. 1. Initial grammar. 2. Correction: correct grammar rules; correct the test suite. 3. Disambiguation: add filters; change the grammar shape. Steps 2 and 3 should make heavy use of testing. This is a simplified development cycle, which can be divided into three parts. The initial grammar was obtained by typing the grammar specification into SDF. Grammar correction: after obtaining the first version of the grammar, it was tested against the available source code. Because we use generalized LR parsing, it is possible to parse any document without disambiguating the grammar; if something does not parse, then either there is a typing error in the grammar (something is missing or incorrect) or we are trying to process examples with errors (language extensions). Testing and correction are applied iteratively. Grammar disambiguation: after being able to parse all the source files, we can start to disambiguate the grammar, by adding disambiguation filters and sometimes changing the grammar shape. As we can see, steps 2 and 3 make heavy use of testing, so let us look more closely at how grammar correction and disambiguation should be carried out.

    25. Grammar correction. Isolate the problem: source location; grammar rules involved. Correct the grammar: change the syntax (or the test suite); run to verify that the test succeeds; run the entire test battery. Commit the change: document the change in the commit message. The first thing to do is isolate the problem: try to figure out where in the source files the problem could reside, and then which grammar rules are involved. After isolating the problem, we know whether it is in the source file (due to language extensions) or in the grammar syntax (due to a typo). Then correct either the test suite or the grammar syntax, run the example that failed to verify that the right correction was made, and run the entire test battery to verify that the change didn't break anything else. Finally, commit the change to the CVS repository and document it in the commit message.

    26. Grammar Disambiguation. Isolate the problem: source location; grammar rules involved. Create a unit test: capture the error; run to guarantee it is captured. Correct the grammar: add a disambiguation filter (change the syntax); run to verify that the unit test succeeds; run the entire test battery. Commit the change: document the change in the commit message. Grammar disambiguation follows the same approach as grammar correction, but with an extra step. Isolate the problem: the source code location allows extracting the piece of code where the ambiguity is, and the grammar rules involved tell us what needs to be changed to correct the error. Create a unit test for the error: with the extracted source code, the error is captured in a single file; run the unit test to verify that the error was properly captured. Correct the grammar: add a disambiguation filter (changing the syntax is sometimes required, because these filters cannot handle injections or chain rules, and the syntax must be changed so the filters can be properly applied); run the unit test to verify that the error was properly corrected, otherwise roll back the change and try another correction; run the entire test battery (the whole test suite) to verify that the change didn't break anything else. Finally, commit the change to the CVS repository and document it in the commit message.

    27. Grammar Metrics. Simple metrics: total number of terminals (average per rule); total number of non-terminals (average per rule). Complex metrics, introduced by Power and Malloy in "A metrics suite for grammar-based software": McCabe cyclomatic complexity; Halstead effort; ... Grammar metrics are one of the important topics of the principles of grammar engineering. Some simple metrics can be easily obtained: the total number of terminals (and its average per rule), the total number of non-terminals (and its average per rule), or even the average ratio of terminals to non-terminals per rule. And there are complex metrics: the paper "A metrics suite for grammar-based software" introduces many of them, for instance McCabe cyclomatic complexity, Halstead effort, and others (such as grammar impurity). A tool called SdfMetz is going to be developed to implement all these metrics for SDF.

    28. Problems found. The ISO document has ambiguities in its specification. Syntax (expressions): Apply vs. RecordConstructor; Apply vs. IsDefTypeExpr; EqualsDefinition; CallStatement vs. Expression. Lexical: quotes are allowed in strings and in characters. Using this methodology, we found some "interesting" problems in the ISO document: the ISO VDM-SL grammar has ambiguities in its specification. In the syntax, three ambiguities were found. Apply vs. RecordConstructor: this was solved by adding a terminal to the RecordConstructor rule to disambiguate it from the Apply expression. Apply vs. IsDefTypeExpr: this was disambiguated in the same way as the above. CallStatement vs. Expression: this was disambiguated by preferring an Expression to a CallStatement, although it was impossible to know whether this was the right choice; in the IFAD implementation only an Expression is allowed, which makes me believe this is a serious error in the ISO document... In the lexical syntax, only one ambiguity was found: quotes are allowed in strings and characters without being properly escaped, which causes ambiguities at the lexical level.
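
    A hypothetical sketch of how the CallStatement vs. Expression choice can be expressed with SDF's prefer filter (the production shapes are simplified stand-ins, not the actual ISO rules):

        context-free syntax
          Expression    -> Statement {prefer}
          CallStatement -> Statement

    When an input parses both ways, the parser keeps the {prefer} reading and drops the other, which matches the choice described above of preferring an Expression to a CallStatement.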

    29. Future plans. Short-term (VDM parser "clients"): VooDooM; formal methods projects; the MCZ Objectifier; a Camila revival? Long-term: topic open for discussion... Future plans, or rather the actual "clients" of the VDM parser. VooDooM is the official client of this project, since the parser was developed for it. But there are a few other clients: formal methods projects, one that will use only the VDM parser and its Haskell interface to calculate function dependencies, and another that will use the whole VooDooM framework; the MCZ Objectifier (MCZ stands for Miguel Cruz, the person who studied how to objectify functional specifications into object-oriented ones, converting VDM-SL to VDM++); and a possible revival of Camila, a formal-methods tool developed at the University of Minho. Long-term plans are an open topic for discussion! :-)

    30. What's next? Test set completion (fill in the remaining 44%): test generation; add examples manually; analyze the rules that were not covered; try to find pathologies. Compute grammar metrics. Test the methodology by developing other grammars. So what's next? Test set completion: it is necessary to get near 100% coverage, and this can be done by test generation (which assumes that everything is correct) or by adding examples manually (a time-consuming task). Analyze the rules that were not covered: before adding a test case for an uncovered rule, one must see why this happened; grammars probably have pathologies that must be cured... Compute grammar metrics: this is planned, and the implementation of the SdfMetz tool will start shortly. Finally, test the same methodology in the development of other grammars, for instance grammars developed from scratch or in grammar inference.

    31. Conclusion. The work was completed in only 3 weeks. A complete grammar of ISO VDM-SL is publicly available for the first time (with a parser). A strong methodology for grammar development was defined. Grammar testing was put into practice: different types of tests; test coverage. This work was completed in only 3 weeks. I took one week more than Ralf Lämmel took for the COBOL grammar, but I didn't have any prior experience (or maybe I'm not as good, and I have to admit it). A complete grammar of ISO VDM-SL is publicly available for the first time, and the parser for the language can be used without any restriction. A strong methodology, baptized "Extreme Grammaring", was defined, and it proved fundamental to the success of this work. Grammar testing was put into practice: different types of tests were defined and, for the first time as far as we know, test coverage was measured.

    32. Thank you!

    33. Questions / Discussion
